IRC logs for #buildstream for Thursday, 2017-11-23

*** semanticdesign has quit IRC  00:50
*** mcatanzaro has quit IRC  01:07
*** tristan has joined #buildstream  07:24
*** bochecha has quit IRC  07:28
*** bochecha has joined #buildstream  07:29
*** bochecha has quit IRC  07:38
*** bochecha has joined #buildstream  07:39
*** bochecha has quit IRC  08:10
*** bochecha has joined #buildstream  08:20
*** bochecha_ has joined #buildstream  08:44
*** bochecha has quit IRC  08:46
*** bochecha_ is now known as bochecha  08:46
*** jude has joined #buildstream  09:43
*** bethw has joined #buildstream  09:50
*** jonathanmaw has joined #buildstream  09:56
*** WSalmon has joined #buildstream  10:21
*** ssam2 has joined #buildstream  10:21
*** WSalmon_ has joined #buildstream  10:26
*** WSalmon has quit IRC  10:27
* ssam2 merges canonical pull URLs branch  10:29
<gitlab-br-bot> buildstream: issue #112 ("Artifact configuration is confusing and fragile, need canonical push/pull urls.") changed state ("closed") https://gitlab.com/BuildStream/buildstream/issues/112  10:29
<gitlab-br-bot> push on buildstream@master (by Sam Thursfield): 4 commits (last: Check connectivity to remote cache on `bst push`) https://gitlab.com/BuildStream/buildstream/commit/4eb33736e1171734a8f4ed93976c0399aa8d85b3  10:29
<gitlab-br-bot> buildstream: merge request (sam/canonical-push-urls->master: Canonical push/pull URLs) #158 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/158  10:29
<gitlab-br-bot> buildstream: Sam Thursfield deleted branch sam/canonical-push-urls  10:29
<tristan> ssam2, I gave it a bit of a push this afternoon  10:30
<tristan> ssam2, it turns out I think it was a gitlab bug: https://gitlab.com/BuildStream/buildstream/-/jobs/41482176  10:31
<tristan> or, a weird bug in buildstream causing a hang somewhere ?  10:31
<ssam2> the slowness, you mean ?  10:31
<ssam2> right  10:31
<tristan> It *does* legitimately take a long time, because cloud suckiness with I/O  10:31
<tristan> that pipeline is *still* running  10:31
<tristan> I just created a new one, and the new one passed  10:32
<ssam2> yeah, it seems to take a long time caching and uncaching artifacts  10:33
<ssam2> maybe we can use a smaller base, though  10:33
<tristan> there are a few bugs filed for integration tests suckiness too  10:33
<ssam2> Duration: 1120 minutes 58 seconds  10:34
<ssam2> that's quite impressive !  10:34
<tristan> but cloud suckiness takes the cake really when it comes to I/O  10:34
<ssam2> but must be an outlier  10:34
<tristan> ssam2, that is still running, it is *certainly* not being slow  10:34
<tristan> that is hung  10:34
<tristan> its either a gitlab bug or a buildstream bug  10:34
<ssam2> right  10:34
<ssam2> yeah  10:34
<tristan> but it's been at the same place for many many hours  10:34
<ssam2> another advantage of using our own runners is that when we get stuck, we can actually ssh into them  10:34
<ssam2> *when they get stuck  10:35
<ssam2> so we would know what the issue was  10:35
<ssam2> jjardon[m] ... around ?  10:35
<tristan> I recall last year doing this other conversion thing which downloaded a lot of gits  10:35
<tristan> and I made a change which doubled the I/O  10:35
<tristan> it made my process take like 10 minutes instead of 7  10:35
<tristan> but I got a bug report "OMG it's taking 2 and a half hours now !"  10:35
<tristan> I ran after it for almost a week  10:36
<tristan> only to find that, it was in fact taking just over an hour before hand on gitlab, and doubling the I/O doubled that  10:36
<tristan> working on the cloud, without dedicated storage on the same machine, is just a crazy idea to begin with  10:37
* tristan cancels the blocked gitlab task now  10:38
<tlater> tristan: So, I think the tracking changes branch is alright now  10:42
<tlater> I just think my changes to make inconsistent loading partial could be made a bit neater  10:42
<tlater> I'm unsure how to go about that though :/  10:42
<tristan> tlater, are we able to know which modules are tracking targets ?  10:44
<jjardon[m]> ssam2 hi  10:44
<tlater> "Modules"?  10:45
<tristan> tlater, if so, we need to force those to be inconsistent instead of forcing a cache key calculation  10:45
<tristan> elements  10:45
<ssam2> jjardon[m], what's the deal with the Baserock gitlab runners now ? and could BuildStream use them ?  10:45
<tristan> sorry, terms flying all over the place  10:45
<tlater> tristan: Yup, that's possible now  10:45
<tristan> tlater, so that seems right, for today and tomorrow, I dont think I can be present in the time overlap with UK  10:45
<tlater> And yes, they are forced inconsistent, and we don't force cache key calculation anymore :)  10:46
<tristan> last few days before trip, have bunch of meetings before I go  10:46
<tlater> Ah, alright  10:46
<tristan> never force cache key calculation ?  10:46
<jjardon[m]> ssam2 I do not see a problem with that; I will try to make them available for buildstream  10:46
<tlater> As part of the tracking changes anyway.  10:46
<tlater> "Never" is perhaps a bit strong  10:47
<ssam2> jjardon[m], great!!!  10:47
<tristan> tlater, that will cause my brain to hurt I think, first it's important for me to remember, understand why we do element._cached() at load time, I think it's so that it's all calculated once before launch under normal circumstances  10:47
<tristan> and also that we can potentially time that activity, and resolve state early  10:47
<ssam2> jjardon[m], i might be able to enable them myself, i just didn't want to do it without asking in case it might cost you lost of money or something  10:48
<tristan> perhaps also we want it all resolved in the main thread before forking and doing it redundantly in child tasks  10:48
<tristan> tlater, better to understand that before removing the loop which does element._cached()  10:48
<tristan> tlater, if I understand correctly, or maybe your branch doesnt do that, I dont know  10:48
<tristan> tlater, what I recommend now, is to do a really, really great test case for it  10:48
<jjardon[m]> ssam2 go for it, you should have to have all the powers :)  10:48
<ssam2> ok cool :-)  10:49
<tlater> tristan: It follows almost exactly what we had beforehand, except that it only avoids doing element._cached() for elements specified as tracking targets  10:49
<tristan> tlater, as I described in the bug; be able to detect if an element was tracked or not (this probably entails ensuring there are 2 commits in every generated source repo)  10:49
<jjardon[m]> (I used to pay them, now they are sponsored by Codethink)  10:50
<tristan> tlater, I think we have tracking tests which already demo how we can check that from test cases  10:50
<ssam2> jjardon[m], ah OK. should be fine then, BuildStream tests take much less CPU time than the Baserock ones we have been running  10:50
<tristan> tlater, no need to spam the tests with multiple source types, I guess just using the git source and git repo test scaffold for all generated elements in the tests is good enough  10:51
<tlater> tristan: I had a look, but I couldn't find any of that... I parse buildstream output in my test case now, and collect every success we report.  10:51
<jjardon[m]> Yeah, that reminds me I have to send v2 of my MR to speedup/remove systems  10:51
<tlater> Is that enough or are you weary of accidental misreporting?  10:52
<tristan> :-/  10:52
<ssam2> jjardon[m], yeah I was a little suspicious about building everything in parallel, instead of first building 1 system and then building all the others  10:53
<tristan> Ok so I can see that track.py test uses a different approach, by testing element state  10:53
<tristan> tlater, I am weary of parsing any stdout in tests at all... but we may be heading down that path for benchmarking if Angelos is indeed following the path we discussed  10:54
<jjardon[m]> Yeah, let me propose another thing  10:54
* tristan never got a reply on that benchmarking thread  10:54
* tlater will modify his helper function to check for commits instead then ;)  10:55
<tristan> tlater, rather I suppose you want to be checking for "track:element-name.bst" tasks in the log  10:55
<tristan> success is rather irrelevant, you want to know; which elements were *actually* tracked  10:55
<tristan> if anything fails, the whole test fails anyway  10:55
<tristan> tlater, I'm not sure it's gonna be that easy to check for commits either, I was making an assumption based on thinking that's an approach we already had  10:56
<tristan> but reaching deep into implementation details is not a good idea from tests  10:56
<tristan> tlater, I wonder, since sources have an API for get/set ref... if we should be able to have `bst show --format "%{refs}"`  10:57
<tristan> or similar  10:57
<tristan> that might give us something useful, and also an ability to check that from test cases, at the same time ?  10:57
* tlater wonders what use case that has outside of test cases  10:58
<tristan> not sure, it's a fair point  10:59
<tristan> I think I want a lot more features from `bst show` *anyway*, but the current simple --format approach is not the best  10:59
<tristan> I'm not sure how to introspect complex data types  10:59
<tristan> like sources, they are actually lists and not "a thing"  11:00
<tristan> meh  11:00
<tristan> tlater, parsing is fine, but please parameterized; with a lot of cases which check that the expected things are tracked  11:00
<tristan> against a data set where we know that everything is *not* at the latest ref  11:00
<tristan> (i.e. all elements *can* be tracked, so the --track-except and such semantics are tested that only the ones we wanted tracked, are)  11:02
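A rough sketch of the parametrized log-parsing helper being discussed here; the "track:element-name.bst" task label comes straight from the conversation above, while the helper name and the exact log layout are assumptions rather than the real test code:

    import re
    import pytest

    def tracked_elements(log_text):
        # Collect the element names of every "track:<element>.bst" task reported
        # in the log; success/failure is irrelevant here, since the whole test
        # fails anyway if anything fails.
        return set(re.findall(r"track:([^\s\]]+\.bst)", log_text))

    @pytest.mark.parametrize("log_text,expected", [
        # Everything was tracked
        ("[track:base.bst] ...\n[track:app.bst] ...\n[build:app.bst] ...\n",
         {"base.bst", "app.bst"}),
        # app.bst was excluded, e.g. via --track-except
        ("[track:base.bst] ...\n[build:app.bst] ...\n",
         {"base.bst"}),
    ])
    def test_expected_elements_tracked(log_text, expected):
        assert tracked_elements(log_text) == expected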
* tlater uses the on-the-fly element generation to do this atm  11:02
<tlater> I'll add some more tests, probably a case or two I missed :)  11:03
<tristan> tlater, with git, I've added `create_submodule()` (i.e. in testutils/repo/git.py) to test the submodules  11:04
<tristan> tlater, it looks like you would *need* to at least have an extra method there to ensure that a generated on-the-fly repo has more than one commit  11:04
<tlater> Yeah, that seems sensible  11:05
<tristan> currently there is only Repo.create() which ensures the required data creates a repo and only one commit ever happens  11:05
<tristan> but not all repo types are backed by a VCS so it's not needed to have that in the Repo() interface  11:05
<tristan> (not all *source* types I guess)  11:06
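The extra scaffold method mentioned here could look roughly like the following; a minimal sketch only, with hypothetical class and method names, not the actual testutils/repo/git.py code:

    import subprocess

    class GitRepo:
        # Simplified stand-in for the git test scaffold; the real code lives in
        # testutils/repo/git.py and implements the Repo() interface
        def __init__(self, directory):
            self.repo = directory
            self._git('init', '.')
            self._git('config', 'user.name', 'Test')
            self._git('config', 'user.email', 'test@example.com')

        def _git(self, *args):
            subprocess.check_call(['git'] + list(args), cwd=self.repo)

        def create(self):
            # Existing behaviour: a single commit from whatever is in the repo dir
            self._git('add', '.')
            self._git('commit', '--allow-empty', '-m', 'Initial commit')
            return self.latest_commit()

        def add_commit(self):
            # The proposed extra method: guarantee more than one commit exists,
            # so that tracking has a newer ref to move to
            self._git('commit', '--allow-empty', '-m', 'Another commit')
            return self.latest_commit()

        def latest_commit(self):
            out = subprocess.check_output(['git', 'rev-parse', 'HEAD'],
                                          cwd=self.repo, universal_newlines=True)
            return out.strip()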
* tristan gotta run...  11:06
<tlater> Alright, ta for points o/  11:07
<tristan> tlater, thanks for thorough test coverage :D  11:07
<tristan> if you run out of things to do *cough*,... you could always fix integration-tests to never use the platform but to stage only the sdk with the correct symlinkage :)  11:08
* tlater has a few other things in his backlog too, should be enough ;)  11:09
*** tristan has quit IRC  11:11
<ssam2> i've started a CI pipeline for master which is running on baserock-manager-runner2  11:32
<ssam2> let's see if that is significantly faster than the one currently running on the GitLab runners  11:32
<ssam2> https://gitlab.com/BuildStream/buildstream/pipelines/14282486 <-- gitlab free runners  11:32
<ssam2> https://gitlab.com/BuildStream/buildstream/pipelines/14285067 <-- baserock runners  11:32
*** tristan has joined #buildstream  11:37
<ssam2> ssh://artifacts@gnome7.codethink.co.uk:10007/artifacts and ssh://ostree@ostree.baserock.org:22200/cache are now working with the canonical pull URLs changes that are in master  12:00
<juergbi> great  12:05
<gitlab-br-bot> push on buildstream@tracking-changes (by Tristan Maat): 1 commit (last: Fix tests) https://gitlab.com/BuildStream/buildstream/commit/874ede73d57c8f98b6567be73e87c163292bf8e2  12:27
<gitlab-br-bot> buildstream: merge request (tracking-changes->master: Tracking changes) #119 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/119  12:27
*** bochecha has quit IRC  12:53
*** bethw has quit IRC  12:56
*** bochecha has joined #buildstream  13:07
<gitlab-br-bot> push on buildstream@juerg/recursive-pipelines (by Jürg Billeter): 18 commits (last: setup.py: Add pytest-xdist dependency) https://gitlab.com/BuildStream/buildstream/commit/841f4e9e47ed4d7d14d8526a657600a6a67d7312  13:14
*** bethw has joined #buildstream  14:05
<jjardon[m]> ssam2: tristan baserock runners enabled for buildstream; let me know if something stop working  14:11
<ssam2> thanks  14:11
<ssam2> the first pipeline i ran didn't seem faster, but it didn't have any cache  14:11
<ssam2> so need to run another one with cache and compare with one that ran on the GitLab free runners  14:12
<ssam2> actually: 2h09 (baserock) vs 2h48 (gitlab)  14:12
<ssam2> so the baserock ones are faster even with cold cache  14:13
<ssam2> let's see how https://gitlab.com/BuildStream/buildstream/pipelines/14292832 goes now that the cache is warm :-)  14:13
<tlater> \o/  14:15
<tlater> This is where we figure out that the cache actually takes longer to unpack than it does to run the tests without it.  14:16
<jjardon[m]> ssam2: the runners itself are definetely more powerful (more cpu and memory) than the gitlab ones  14:16
<jjardon[m]> the cache; what you said :)  14:17
<ssam2> spinning up the runners still takes a while  14:17
<ssam2> didn't we enable an option to keep them around for a while to avoid that ?  14:17
<ssam2>     IdleTime = 1800  14:18
<ssam2> so they should stick around for 30 mins after a build  14:19
<ssam2> it's 38 minutes since my last pipeline finished, I guess :-)  14:19
<gitlab-br-bot> push on buildstream@tracking-changes (by Tristan Maat): 11 commits (last: Check connectivity to remote cache on `bst push`) https://gitlab.com/BuildStream/buildstream/commit/4eb33736e1171734a8f4ed93976c0399aa8d85b3  14:44
<gitlab-br-bot> buildstream: merge request (tracking-changes->master: Tracking changes) #119 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/119  14:44
<gitlab-br-bot> push on buildstream@tracking-changes (by Tristan Maat): 2 commits (last: _pipeline.py: Merge the track queue into the scheduler run) https://gitlab.com/BuildStream/buildstream/commit/923b876c355d4d9b8f9a9f85f3e24d242e45ae8d  14:45
<gitlab-br-bot> buildstream: merge request (tracking-changes->master: Tracking changes) #119 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/119  14:45
*** bochecha has quit IRC  14:47
<gitlab-br-bot> push on buildstream@142-potentially-printing-provenance-more-than-once-in-loaderrors (by Tristan Maat): 20 commits (last: load.py: Add test to check intersection exceptions) https://gitlab.com/BuildStream/buildstream/commit/dc3169ec201db997574025963736c85aea71befc  14:47
*** ssam2 has quit IRC  14:47
*** ssam2 has joined #buildstream  14:49
<jonathanmaw> tlater: when you were working on third-party plugin loading, did you know pluginbase.PluginSource can be created with persist=True, which claims to stop plugins getting garbage collected?  14:58
* tlater eventually set that flag  14:58
<jonathanmaw> I've seen some comments suggesting the behaviour of plugin loading is based around preventing them from being garbage collected  14:58
<tlater> I think  14:58
<tlater> Otherwise I decided against that for some reason  14:58
<jonathanmaw> tlater: probably the latter, since I can't see it here  14:59
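For reference, the pluginbase flag being talked about is used roughly like this; a generic sketch, not BuildStream's actual plugin-loading code (the package name and search path are placeholders):

    from pluginbase import PluginBase

    # The package under which loaded plugins get namespaced; placeholder name
    plugin_base = PluginBase(package='myapp.plugins')

    # persist=True keeps loaded plugin modules alive for the lifetime of the
    # process, instead of letting them be garbage collected together with the
    # PluginSource object that loaded them
    plugin_source = plugin_base.make_plugin_source(
        searchpath=['./plugins'], persist=True)

    for name in plugin_source.list_plugins():
        plugin = plugin_source.load_plugin(name)
        # ... hand the module over to whatever registers element/source kinds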
*** bochecha has joined #buildstream  15:15
<tristan> jjardon[m], ssam2; fwiw; I dont think cpu and memory are as much as a concern as "knowing we have dedicated storage on a dedicated machine", really the main bottleneck with gitlab is I/O  15:26
<jjardon[m]> tristan: the baserock runners are exactly the same as the gitlab ones (they are in "the cloud") if thats what you are asking  15:31
<jjardon[m]> same provider as well (digitalocean)  15:31
<tristan> :-S  15:33
<tristan> if they are dedicated machines "in the cloud" where the CPU and memory is always on the same machine as the actual hard disks / SSD, that's where we win  15:33
<tristan> maybe things will be a bit faster with these runners, but it's really having the storage close at hand that counts  15:34
<tristan> ssam2, is it possible your canonical urls branch has anything to do with the performance regression I'm seeing ? ... currently I see a large unexplained pause in between "Loading ..." and "Resolving ..."  15:36
<tristan> at startup time  15:37
<tristan> seems to be quite new  15:37
<tristan> ssam2, maybe you moved the part where we check if we are allowed to push artifacts ?  15:38
<tristan> ssam2, I notice now that I also dont get any message about checking if I can push to the cache, yet I do have a push queue  15:38
<tristan> ooops  15:39
<tristan> and worse  15:39
<tristan> ssam2, https://bpaste.net/show/5284ac59f7e9  15:39
<tristan> ssam2, I'm getting that stack trace on *every push* now  15:39
<tristan> this looks a bit untested :-S  15:40
<ssam2> ouch, yes  15:40
<tristan> I think I have a fix for the latter stack traces at least  15:42
* tristan is testing now  15:42
<tristan> wanted to get the latest into the cache with the new push stuff  15:42
<ssam2> I changed the stuff about checking for push access because now, if your cache is a ssh:// URL, we have to contact it right away  15:44
<ssam2> in order to get hold of the corresponding pull URL  15:44
<ssam2> so we do still check, but we don't announce it now  15:44
<ssam2> perhaps we should have a "contacting cache at ssh://example.com" message though  15:45
<tristan> and we do it after Loading and before Resolving ?  15:45
<ssam2> as it could indeed hang horribly in cases where network is bad  15:45
<ssam2> we have to do it before fetching the artifact list  15:45
<ssam2> so probably  15:45
<gitlab-br-bot> push on buildstream@master (by Tristan Van Berkom): 1 commit (last: _artifactcache.py: Fixed stack trace regression in pushing of artifacts.) https://gitlab.com/BuildStream/buildstream/commit/628e9a23f8b9d14f0216f85240b5ed148a08b117  15:47
* tristan fixed the stack trace and at least it's pushing now  15:47
<tristan> ssam2, if you can fix the awkward hang at startup of `bst build` that would also be cool  15:47
<ssam2> i don't see much of a hang  15:48
<ssam2> but, i'm closer to the cache than you  15:48
*** bethw has quit IRC  15:48
<tristan> we will probably be removing the tickers in favor of earlier logging, which will help; either that comes now or it comes when juergbi lands recursive pipelines  15:48
<tristan> ssam2, you are exactly the wrong person to observe, you are actually the closest to the cache in the whole world  15:49
<tristan> for me, it hangs for like 2 or 3 seconds  15:49
<ssam2> the only way to fix it is to make it parallel  15:49
<ssam2> which should be fairly simple using multiprocessing, right ?  15:49
<tristan> I dont mind that the time is spent  15:49
<tristan> what I mind is that the user wonders what the hell is going on  15:49
<tristan> it's sloppy, we're not even telling them what we're doing, just hanging  15:49
<ssam2> agreed  15:49
<tristan> (we did have the same hang before, it was just timed and nicely explained in the log, that's all)  15:50
<ssam2> would be really nice if we could make it parallel though; the other work going on is disk-io-bound  15:51
<tristan> I'm not sure it's feasible  15:51
<tristan> well  15:51
<tristan> part of it  15:51
<tristan> I mean, we need to download the summary of what is in the remote cache, in order to construct a build plan  15:52
<ssam2> yeah. we don't do that til after 'resolving', though  15:52
<tristan> at least that part needs to happen before we start building anyway  15:52
<tristan> note, I also change things before so that we only do that thing in commands where we need it  15:53
<tristan> we used to hang for like 2 or 3 seconds on `bst show`  15:53
<tristan> *changed  15:53
<tristan> that was a few weeks ago anyway  15:53
<ssam2> right; we need to avoid talking to the remote cache at all in that case  15:54
<tristan> actually for bst show I added a flag specifically for that  15:54
<ssam2> will see about fixing that  15:54
<tristan> because we do have a "downloadable" state to show  15:54
<tristan> but anyway, refreshing local knowledge of downloadable state is now explicit for that reason  15:55
<ssam2> makes sense  15:55
<tristan> code is starting to be in need of fondling and caressing :)  15:55
<tristan> too many features being banged in, needs love :D  15:56
<ssam2> looking at the code here, the issue is that ostreecache sets itself up in its constructor  15:57
<ssam2> and that requires the network access  15:57
<ssam2> so need to have some kind of separate ArtifactCache.initialize() function  15:58
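In other words, something along these lines; a sketch of the lazy-initialization idea only, with hypothetical names, not the actual ostreecache/ArtifactCache code:

    class ArtifactCache:
        def __init__(self, context, push_url=None):
            # No network access here; just record configuration
            self.context = context
            self.push_url = push_url
            self.pull_url = None
            self._initialized = False

        def initialize(self):
            # Contact the remote once, on demand; for an ssh:// cache this is
            # where the round trip to discover the corresponding pull URL and
            # check push access would happen
            if self._initialized:
                return
            self.pull_url = self._query_pull_url(self.push_url)
            self._initialized = True

        def fetch_remote_refs(self):
            # Only commands that actually need the remote pay the cost
            self.initialize()
            return []  # placeholder for downloading the remote summary

        def _query_pull_url(self, push_url):
            return push_url  # placeholder for the real network call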
<juergbi> ticker removal might also land after recursive pipelines. trying to get a it working reasonably well without it to avoid blocking recursive pipelines  16:00
<tristan> juergbi, sure; I was under the impression that the recursive loads implied something of that nature, though  16:04
<tristan> but I havent seen it yet :)  16:04
<juergbi> i have it working with existing ticker. it's not ideal but i think it should be ok  16:05
<juergbi> or rather, with minimally extended ticker  16:06
<juergbi> will push a branch update shortly  16:06
*** WSalmon_ has quit IRC  16:12
<gitlab-br-bot> push on buildstream@juerg/recursive-pipelines (by Jürg Billeter): 14 commits (last: _artifactcache.py: Fixed stack trace regression in pushing of artifacts.) https://gitlab.com/BuildStream/buildstream/commit/628e9a23f8b9d14f0216f85240b5ed148a08b117  16:20
*** bethw has joined #buildstream  16:27
<gitlab-br-bot> push on buildstream@sam/artifact-delay-init (by Sam Thursfield): 1 commit (last: Only initialize remote artifact cache connections if needed) https://gitlab.com/BuildStream/buildstream/commit/b075dd17cf0f4d7bdd0c22726a3a2df5a55d8498  16:42
<gitlab-br-bot> buildstream: merge request (sam/artifact-delay-init->master: Only initialize remote artifact cache connections if needed) #159 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/159  16:43
<gitlab-br-bot> push on buildstream@juerg/recursive-pipelines (by Jürg Billeter): 1 commit (last: Add junction support for subprojects) https://gitlab.com/BuildStream/buildstream/commit/67011274db3e2cf9f6188ec6a134ed8691043601  16:43
<gitlab-br-bot> buildstream: merge request (sam/artifact-delay-init->master: Only initialize remote artifact cache connections if needed) #159 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/159  16:44
<juergbi> ssam2: we should probably coordinate work on supporting multiple artifact servers (needed for subprojects with project-specific artifact urls)  16:44
<ssam2> right, ok  16:45
<ssam2> i made a start already, although it'll need a little rebasing  16:45
<ssam2> if you need it soon, perhaps it makes sense to finish up the 'core' changes quickly, and worry later about UI and testing  16:46
<gitlab-br-bot> buildstream: merge request (sam/artifact-delay-init->master: Only initialize remote artifact cache connections if needed) #159 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/159  16:47
<juergbi> your plan is to land it shortly after 1.0 as well?  16:47
<ssam2> i guess  16:47
<juergbi> i still have to finish test and documentation aspects of recursive pipelines, which can mostly be done independently  16:47
<ssam2> it'll be backwards compatible with what we have in master now that canonical-pull-urls is merged, so yes, probably best to wait til after 1.0  16:48
<juergbi> the main thing would be that we keep both use cases in mind  16:48
<ssam2> yeah  16:48
<ssam2> the model I had was, there's still just one ArtifactCache object for the pipeline  16:48
<ssam2> but that now has multiple remotes  16:48
<juergbi> that sounds good. multiple ArtifactCache instances doesn't look like a good idea  16:49
<ssam2> yeah, it just pushes more work onto the Pipeline object  16:49
<juergbi> what i'd need is one group/list of remotes for each project  16:49
<ssam2> ok, that's a bit more complex  16:50
<juergbi> i can do it on top of your branch later or we can try to do it at the same time  16:50
<juergbi> but it doesn't sound sensible if i do something without your branch  16:50
<ssam2> especially as we still have global ones, that come from ~/.config/buildstream.conf  16:50
<ssam2> probably makes sense if I finish up the core changes and send a WIP merge request  16:51
<juergbi> right, either we copy those in each project's remotes or we have some more flexible structure  16:51
<ssam2> I just made it so ArtifactCache has a `urls` list instead of a single `url` attribute  16:51
<juergbi> ok, sounds good. i'll also open a WIP merge request for recursive pipelines so that the rest can also be reviewed if desired  16:52
<ssam2> that could become a dict keyed by project, and then operations could take the name of the project  16:52
<juergbi> yes  16:52
<juergbi> operations that already take an element implicitly have a project reference. for other operations we'll have to take a look  16:52
<ssam2> ah, they all take an element apart from fetch_remote_refs() really  16:53
<ssam2> and that should probably be fetching *all* remote refs  16:53
<juergbi> yes  16:53
<juergbi> i don't have an easy to access global project list yet but i'll probably add that  16:55
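A sketch of the shape being agreed on here (one ArtifactCache, remotes grouped per project, global remotes from buildstream.conf copied into every project's list); all names are hypothetical, this is not the eventual merge request:

    class ArtifactCache:
        def __init__(self, global_urls):
            self._global_urls = list(global_urls)   # from ~/.config/buildstream.conf
            self._project_urls = {}                 # project name -> [urls]

        def add_project(self, project_name, project_urls):
            # Project-specific remotes first, the global ones copied in after
            self._project_urls[project_name] = list(project_urls) + self._global_urls

        def remotes_for_element(self, element):
            # Operations that take an element implicitly know their project
            return self._project_urls.get(element.project_name, self._global_urls)

        def fetch_remote_refs(self):
            # The one operation without an element: fetch refs from *all* remotes
            urls = set(self._global_urls)
            for project_urls in self._project_urls.values():
                urls.update(project_urls)
            return {url: [] for url in urls}        # placeholder for real fetching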
*** semanticdesign has joined #buildstream  16:58
<ssam2> it looks like we save over an hour on CI with the new runners, now the cache is hot  17:04
<ssam2> 1h38 vs. 2h48  17:04
<tlater> Sweet :)  17:04
<ssam2> still way too slow, but a step forwards :-)  17:04
<gitlab-br-bot> buildstream: merge request (juerg/recursive-pipelines->master: WIP: Junctions and subprojects) #160 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/160  17:10
<tristan> ouch: https://bpaste.net/show/f7c533794a78  17:13
<tristan> looks like SafeHardlinkOps is mounted there still  17:14
<tristan> damn fuse  17:14
*** bethw has quit IRC  17:34
<jonathanmaw> only two failed tests now!  17:42
<gitlab-br-bot> push on buildstream@sam/artifact-delay-init (by Sam Thursfield): 1 commit (last: Only initialize remote artifact cache connections if needed) https://gitlab.com/BuildStream/buildstream/commit/f17ef1e47cfce604bcd93373eae9646a96e9122d  17:42
<gitlab-br-bot> buildstream: merge request (sam/artifact-delay-init->master: Only initialize remote artifact cache connections if needed) #159 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/159  17:42
<jonathanmaw> test_conflict_source and test_conflict_element still fail, because I've moved the logic that checks for duplicate plugins into project  17:43
*** jonathanmaw has quit IRC  18:01
*** ssam2 has quit IRC  18:47
<gitlab-br-bot> buildstream: merge request (sam/artifact-delay-init->master: Only initialize remote artifact cache connections if needed) #159 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/159  19:21
<gitlab-br-bot> push on buildstream@master (by Tristan Van Berkom): 1 commit (last: Only initialize remote artifact cache connections if needed) https://gitlab.com/BuildStream/buildstream/commit/f17ef1e47cfce604bcd93373eae9646a96e9122d  19:21
<gitlab-br-bot> buildstream: Sam Thursfield deleted branch sam/artifact-delay-init  19:21
*** WSalmon_ has joined #buildstream  19:48
*** WSalmon_ has quit IRC  19:52
*** semanticdesign has quit IRC  20:22
*** bochecha has quit IRC  20:50
*** valentind has joined #buildstream  22:09
*** bethw has joined #buildstream  22:18
*** bethw has quit IRC  22:58
*** tristan has quit IRC  23:17
<gitlab-br-bot> buildstream: merge request (fix-compose-delete-with-symlink-in-path->master: Remove non canonical path from manifest after integration commands in compose plugin.) #161 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/161  23:18
*** valentind has quit IRC  23:33
