*** semanticdesign has quit IRC | 00:50 | |
*** mcatanzaro has quit IRC | 01:07 | |
*** tristan has joined #buildstream | 07:24 | |
*** bochecha has quit IRC | 07:28 | |
*** bochecha has joined #buildstream | 07:29 | |
*** bochecha has quit IRC | 07:38 | |
*** bochecha has joined #buildstream | 07:39 | |
*** bochecha has quit IRC | 08:10 | |
*** bochecha has joined #buildstream | 08:20 | |
*** bochecha_ has joined #buildstream | 08:44 | |
*** bochecha has quit IRC | 08:46 | |
*** bochecha_ is now known as bochecha | 08:46 | |
*** jude has joined #buildstream | 09:43 | |
*** bethw has joined #buildstream | 09:50 | |
*** jonathanmaw has joined #buildstream | 09:56 | |
*** WSalmon has joined #buildstream | 10:21 | |
*** ssam2 has joined #buildstream | 10:21 | |
*** WSalmon_ has joined #buildstream | 10:26 | |
*** WSalmon has quit IRC | 10:27 | |
* ssam2 merges canonical pull URLs branch | 10:29 | |
gitlab-br-bot | buildstream: issue #112 ("Artifact configuration is confusing and fragile, need canonical push/pull urls.") changed state ("closed") https://gitlab.com/BuildStream/buildstream/issues/112 | 10:29 |
gitlab-br-bot | push on buildstream@master (by Sam Thursfield): 4 commits (last: Check connectivity to remote cache on `bst push`) https://gitlab.com/BuildStream/buildstream/commit/4eb33736e1171734a8f4ed93976c0399aa8d85b3 | 10:29 |
gitlab-br-bot | buildstream: merge request (sam/canonical-push-urls->master: Canonical push/pull URLs) #158 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/158 | 10:29 |
gitlab-br-bot | buildstream: Sam Thursfield deleted branch sam/canonical-push-urls | 10:29 |
tristan | ssam2, I gave it a bit of a push this afternoon | 10:30 |
tristan | ssam2, it turns out I think it was a gitlab bug: https://gitlab.com/BuildStream/buildstream/-/jobs/41482176 | 10:31 |
tristan | or, a weird bug in buildstream causing a hang somewhere ? | 10:31 |
ssam2 | the slowness, you mean ? | 10:31 |
ssam2 | right | 10:31 |
tristan | It *does* legitimately take a long time, because cloud suckiness with I/O | 10:31 |
tristan | that pipeline is *still* running | 10:31 |
tristan | I just created a new one, and the new one passed | 10:32 |
ssam2 | yeah, it seems to take a long time caching and uncaching artifacts | 10:33 |
ssam2 | maybe we can use a smaller base, though | 10:33 |
tristan | there are a few bugs filed for integration tests suckiness too | 10:33 |
ssam2 | Duration: 1120 minutes 58 seconds | 10:34 |
ssam2 | that's quite impressive ! | 10:34 |
tristan | but cloud suckiness takes the cake really when it comes to I/O | 10:34 |
ssam2 | but must be an outlier | 10:34 |
tristan | ssam2, that is still running, it is *certainly* not being slow | 10:34 |
tristan | that is hung | 10:34 |
tristan | it's either a gitlab bug or a buildstream bug | 10:34 |
ssam2 | right | 10:34 |
ssam2 | yeah | 10:34 |
tristan | but it's been at the same place for many many hours | 10:34 |
ssam2 | another advantage of using our own runners is that when we get stuck, we can actually ssh into them | 10:34 |
ssam2 | *when they get stuck | 10:35 |
ssam2 | so we would know what the issue was | 10:35 |
ssam2 | jjardon[m] ... around ? | 10:35 |
tristan | I recall last year doing this other conversion thing which downloaded a lot of gits | 10:35 |
tristan | and I made a change which doubled the I/O | 10:35 |
tristan | it made my process take like 10 minutes instead of 7 | 10:35 |
tristan | but I got a bug report "OMG it's taking 2 and a half hours now !" | 10:35 |
tristan | I ran after it for almost a week | 10:36 |
tristan | only to find that it was in fact taking just over an hour beforehand on gitlab, and doubling the I/O doubled that | 10:36 |
tristan | working on the cloud, without dedicated storage on the same machine, is just a crazy idea to begin with | 10:37 |
* tristan cancels the blocked gitlab task now | 10:38 | |
tlater | tristan: So, I think the tracking changes branch is alright now | 10:42 |
tlater | I just think my changes to make inconsistent loading partial could be made a bit neater | 10:42 |
tlater | I'm unsure how to go about that though :/ | 10:42 |
tristan | tlater, are we able to know which modules are tracking targets ? | 10:44 |
jjardon[m] | ssam2 hi | 10:44 |
tlater | "Modules"? | 10:45 |
tristan | tlater, if so, we need to force those to be inconsistent instead of forcing a cache key calculation | 10:45 |
tristan | elements | 10:45 |
ssam2 | jjardon[m], what's the deal with the Baserock gitlab runners now ? and could BuildStream use them ? | 10:45 |
tristan | sorry, terms flying all over the place | 10:45 |
tlater | tristan: Yup, that's possible now | 10:45 |
tristan | tlater, so that seems right; for today and tomorrow, I don't think I can be present in the time overlap with the UK | 10:45 |
tlater | And yes, they are forced inconsistent, and we don't force cache key calculation anymore :) | 10:46 |
tristan | last few days before trip, have bunch of meetings before I go | 10:46 |
tlater | Ah, alright | 10:46 |
tristan | never force cache key calculation ? | 10:46 |
jjardon[m] | ssam2 I do not see a problem with that; I will try to make them available for buildstream | 10:46 |
tlater | As part of the tracking changes anyway. | 10:46 |
tlater | "Never" is perhaps a bit strong | 10:47 |
ssam2 | jjardon[m], great!!! | 10:47 |
tristan | tlater, that will cause my brain to hurt I think; first it's important for me to remember and understand why we do element._cached() at load time, I think it's so that it's all calculated once before launch under normal circumstances | 10:47 |
tristan | and also that we can potentially time that activity, and resolve state early | 10:47 |
ssam2 | jjardon[m], i might be able to enable them myself, i just didn't want to do it without asking in case it might cost you lots of money or something | 10:48 |
tristan | perhaps also we want it all resolved in the main thread before forking and doing it redundantly in child tasks | 10:48 |
tristan | tlater, better to understand that before removing the loop which does element._cached() | 10:48 |
tristan | tlater, if I understand correctly, or maybe your branch doesn't do that, I don't know | 10:48 |
tristan | tlater, what I recommend now, is to do a really, really great test case for it | 10:48 |
jjardon[m] | ssam2 go for it, you should have to have all the powers :) | 10:48 |
ssam2 | ok cool :-) | 10:49 |
tlater | tristan: It follows almost exactly what we had beforehand, except that it only avoids doing element._cached() for elements specified as tracking targets | 10:49 |
tristan | tlater, as I described in the bug; be able to detect if an element was tracked or not (this probably entails ensuring there are 2 commits in every generated source repo) | 10:49 |
jjardon[m] | (I used to pay them, now they are sponsored by Codethink) | 10:50 |
tristan | tlater, I think we have tracking tests which already demo how we can check that from test cases | 10:50 |
ssam2 | jjardon[m], ah OK. should be fine then, BuildStream tests take much less CPU time than the Baserock ones we have been running | 10:50 |
tristan | tlater, no need to spam the tests with multiple source types, I guess just using the git source and git repo test scaffold for all generated elements in the tests is good enough | 10:51 |
tlater | tristan: I had a look, but I couldn't find any of that... I parse buildstream output in my test case now, and collect every success we report. | 10:51 |
jjardon[m] | Yeah, that reminds me I have to send v2 of my MR to speedup/remove systems | 10:51 |
tlater | Is that enough or are you wary of accidental misreporting? | 10:52 |
tristan | :-/ | 10:52 |
ssam2 | jjardon[m], yeah I was a little suspicious about building everything in parallel, instead of first building 1 system and then building all the others | 10:53 |
tristan | Ok so I can see that track.py test uses a different approach, by testing element state | 10:53 |
tristan | tlater, I am wary of parsing any stdout in tests at all... but we may be heading down that path for benchmarking if Angelos is indeed following the path we discussed | 10:54 |
jjardon[m] | Yeah, let me propose another thing | 10:54 |
* tristan never got a reply on that benchmarking thread | 10:54 | |
* tlater will modify his helper function to check for commits instead then ;) | 10:55 | |
tristan | tlater, rather I suppose you want to be checking for "track:element-name.bst" tasks in the log | 10:55 |
tristan | success is rather irrelevant, you want to know; which elements were *actually* tracked | 10:55 |
tristan | if anything fails, the whole test fails anyway | 10:55 |
tristan | tlater, I'm not sure it's gonna be that easy to check for commits either, I was making an assumption based on thinking that's an approach we already had | 10:56 |
tristan | but reaching deep into implementation details is not a good idea from tests | 10:56 |
tristan | tlater, I wonder, since sources have an API for get/set ref... if we should be able to have `bst show --format "%{refs}"` | 10:57 |
tristan | or similar | 10:57 |
tristan | that might give us something useful, and also an ability to check that from test cases, at the same time ? | 10:57 |
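As a rough illustration of the `%{refs}` idea above: a test could read refs through the CLI instead of reaching into source internals. The `%{refs}` format token is only a proposal here, not existing BuildStream behaviour, and the element name is made up.

```python
import subprocess

def show_refs(project_dir, element):
    # Hypothetical: depends on the proposed '%{refs}' format token, which does
    # not exist yet; 'bst show --format' itself is real.
    result = subprocess.run(
        ["bst", "show", "--format", "%{name}: %{refs}", element],
        cwd=project_dir, capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# A tracking test could then compare show_refs(project, "app.bst") before and
# after a 'bst build --track' run, without touching source implementation details.
```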
* tlater wonders what use case that has outside of test cases | 10:58 | |
tristan | not sure, it's a fair point | 10:59 |
tristan | I think I want a lot more features from `bst show` *anyway*, but the current simple --format approach is not the best | 10:59 |
tristan | I'm not sure how to introspect complex data types | 10:59 |
tristan | like sources, they are actually lists and not "a thing" | 11:00 |
tristan | meh | 11:00 |
tristan | tlater, parsing is fine, but please parameterize it, with a lot of cases which check that the expected things are tracked | 11:00 |
tristan | against a data set where we know that everything is *not* at the latest ref | 11:00 |
tristan | (i.e. all elements *can* be tracked, so the --track-except and such semantics are tested that only the ones we wanted tracked, are) | 11:02 |
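A toy sketch of the parametrized, log-parsing style of check being asked for here. The `track:element.bst` markers follow the idea mentioned earlier in the conversation, but the exact log lines and element names are made up; a real test would feed in the output of a `bst build --track ...` run against repos that are not at their latest ref.

```python
import re
import pytest

def tracked_elements(log_text):
    # Collect which elements actually had a track task, by looking for
    # 'track:<element>.bst' markers in the build log.
    return set(re.findall(r"track:(\S+\.bst)", log_text))

# (log excerpt, elements we expect to have been tracked)
CASES = [
    ("[track:app.bst] START\n[track:app.bst] SUCCESS", {"app.bst"}),
    ("[track:app.bst] START\n[track:lib.bst] START", {"app.bst", "lib.bst"}),
    ("[build:app.bst] START", set()),   # nothing tracked, e.g. excluded via --track-except
]

@pytest.mark.parametrize("log,expected", CASES)
def test_tracked_elements(log, expected):
    assert tracked_elements(log) == expected
```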
* tlater uses the on-the-fly element generation to do this atm | 11:02 | |
tlater | I'll add some more tests, probably a case or two I missed :) | 11:03 |
tristan | tlater, with git, I've added `create_submodule()` (i.e. in testutils/repo/git.py) to test the submodules | 11:04 |
tristan | tlater, it looks like you would *need* to at least have an extra method there to ensure that a generated on-the-fly repo has more than one commit | 11:04 |
tlater | Yeah, that seems sensible | 11:05 |
tristan | currently there is only Repo.create() which ensures the required data creates a repo and only one commit ever happens | 11:05 |
tristan | but not all repo types are backed by a VCS so it's not needed to have that in the Repo() interface | 11:05 |
tristan | (not all *source* types I guess) | 11:06 |
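A sketch of the kind of extra scaffold helper being suggested, i.e. a way to give a generated test repo more than one commit so tests can tell tracked elements apart from untracked ones. The function name and the use of an empty commit are assumptions, not existing testutils API.

```python
import subprocess

def add_commit(repo_dir):
    """Create an empty follow-up commit in a generated test repo.

    With more than one commit, a tracked element's ref moves to the new HEAD
    while an untracked one stays at the commit made when the repo was created.
    Returns the new commit sha.
    """
    subprocess.check_call(
        ["git", "commit", "--allow-empty", "-m", "Add a second commit"],
        cwd=repo_dir,
    )
    return subprocess.check_output(
        ["git", "rev-parse", "HEAD"], cwd=repo_dir, universal_newlines=True,
    ).strip()
```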
* tristan gotta run... | 11:06 | |
tlater | Alright, ta for points o/ | 11:07 |
tristan | tlater, thanks for thorough test coverage :D | 11:07 |
tristan | if you run out of things to do *cough*,... you could always fix integration-tests to never use the platform but to stage only the sdk with the correct symlinkage :) | 11:08 |
* tlater has a few other things in his backlog too, should be enough ;) | 11:09 | |
*** tristan has quit IRC | 11:11 | |
ssam2 | i've started a CI pipeline for master which is running on baserock-manager-runner2 | 11:32 |
ssam2 | let's see if that is significantly faster than the one currently running on the GitLab runners | 11:32 |
ssam2 | https://gitlab.com/BuildStream/buildstream/pipelines/14282486 <-- gitlab free runners | 11:32 |
ssam2 | https://gitlab.com/BuildStream/buildstream/pipelines/14285067 <-- baserock runners | 11:32 |
*** tristan has joined #buildstream | 11:37 | |
ssam2 | ssh://artifacts@gnome7.codethink.co.uk:10007/artifacts and ssh://ostree@ostree.baserock.org:22200/cache are now working with the canonical pull URLs changes that are in master | 12:00 |
juergbi | great | 12:05 |
gitlab-br-bot | push on buildstream@tracking-changes (by Tristan Maat): 1 commit (last: Fix tests) https://gitlab.com/BuildStream/buildstream/commit/874ede73d57c8f98b6567be73e87c163292bf8e2 | 12:27 |
gitlab-br-bot | buildstream: merge request (tracking-changes->master: Tracking changes) #119 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/119 | 12:27 |
*** bochecha has quit IRC | 12:53 | |
*** bethw has quit IRC | 12:56 | |
*** bochecha has joined #buildstream | 13:07 | |
gitlab-br-bot | push on buildstream@juerg/recursive-pipelines (by Jürg Billeter): 18 commits (last: setup.py: Add pytest-xdist dependency) https://gitlab.com/BuildStream/buildstream/commit/841f4e9e47ed4d7d14d8526a657600a6a67d7312 | 13:14 |
*** bethw has joined #buildstream | 14:05 | |
jjardon[m] | ssam2: tristan baserock runners enabled for buildstream; let me know if something stop working | 14:11 |
ssam2 | thanks | 14:11 |
ssam2 | the first pipeline i ran didn't seem faster, but it didn't have any cache | 14:11 |
ssam2 | so need to run another one with cache and compare with one that ran on the GitLab free runners | 14:12 |
ssam2 | actually: 2h09 (baserock) vs 2h48 (gitlab) | 14:12 |
ssam2 | so the baserock ones are faster even with cold cache | 14:13 |
ssam2 | let's see how https://gitlab.com/BuildStream/buildstream/pipelines/14292832 goes now that the cache is warm :-) | 14:13 |
tlater | \o/ | 14:15 |
tlater | This is where we figure out that the cache actually takes longer to unpack than it does to run the tests without it. | 14:16 |
jjardon[m] | ssam2: the runners themselves are definitely more powerful (more cpu and memory) than the gitlab ones | 14:16 |
jjardon[m] | the cache; what you said :) | 14:17 |
ssam2 | spinning up the runners still takes a while | 14:17 |
ssam2 | didn't we enable an option to keep them around for a while to avoid that ? | 14:17 |
ssam2 | IdleTime = 1800 | 14:18 |
ssam2 | so they should stick around for 30 mins after a build | 14:19 |
ssam2 | it's 38 minutes since my last pipeline finished, I guess :-) | 14:19 |
gitlab-br-bot | push on buildstream@tracking-changes (by Tristan Maat): 11 commits (last: Check connectivity to remote cache on `bst push`) https://gitlab.com/BuildStream/buildstream/commit/4eb33736e1171734a8f4ed93976c0399aa8d85b3 | 14:44 |
gitlab-br-bot | buildstream: merge request (tracking-changes->master: Tracking changes) #119 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/119 | 14:44 |
gitlab-br-bot | push on buildstream@tracking-changes (by Tristan Maat): 2 commits (last: _pipeline.py: Merge the track queue into the scheduler run) https://gitlab.com/BuildStream/buildstream/commit/923b876c355d4d9b8f9a9f85f3e24d242e45ae8d | 14:45 |
gitlab-br-bot | buildstream: merge request (tracking-changes->master: Tracking changes) #119 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/119 | 14:45 |
*** bochecha has quit IRC | 14:47 | |
gitlab-br-bot | push on buildstream@142-potentially-printing-provenance-more-than-once-in-loaderrors (by Tristan Maat): 20 commits (last: load.py: Add test to check intersection exceptions) https://gitlab.com/BuildStream/buildstream/commit/dc3169ec201db997574025963736c85aea71befc | 14:47 |
*** ssam2 has quit IRC | 14:47 | |
*** ssam2 has joined #buildstream | 14:49 | |
jonathanmaw | tlater: when you were working on third-party plugin loading, did you know pluginbase.PluginSource can be created with persist=True, which claims to stop plugins getting garbage collected? | 14:58 |
* tlater eventually set that flag | 14:58 | |
jonathanmaw | I've seen some comments suggesting the behaviour of plugin loading is based around preventing them from being garbage collected | 14:58 |
tlater | I think | 14:58 |
tlater | Otherwise I decided against that for some reason | 14:58 |
jonathanmaw | tlater: probably the latter, since I can't see it here | 14:59 |
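For reference, a minimal sketch of the pluginbase flag being discussed; with persist=True the loaded plugin modules are kept alive rather than being garbage collected along with the PluginSource. The package name, search path and plugin name below are illustrative.

```python
from pluginbase import PluginBase

plugin_base = PluginBase(package="bst_external_plugins")      # illustrative package name
plugin_source = plugin_base.make_plugin_source(
    searchpath=["/path/to/third-party/plugins"],              # illustrative path
    persist=True,   # keep loaded plugin modules alive beyond the PluginSource
)

with plugin_source:
    element_plugin = plugin_source.load_plugin("myelement")   # illustrative plugin name
```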
*** bochecha has joined #buildstream | 15:15 | |
tristan | jjardon[m], ssam2; fwiw, I don't think cpu and memory are as much of a concern as "knowing we have dedicated storage on a dedicated machine"; really the main bottleneck with gitlab is I/O | 15:26 |
jjardon[m] | tristan: the baserock runners are exactly the same as the gitlab ones (they are in "the cloud") if that's what you are asking | 15:31 |
jjardon[m] | same provider as well (digitalocean) | 15:31 |
tristan | :-S | 15:33 |
tristan | if they are dedicated machines "in the cloud" where the CPU and memory is always on the same machine as the actual hard disks / SSD, that's where we win | 15:33 |
tristan | maybe things will be a bit faster with these runners, but it's really having the storage close at hand that counts | 15:34 |
tristan | ssam2, is it possible your canonical urls branch has anything to do with the performance regression I'm seeing ? ... currently I see a large unexplained pause in between "Loading ..." and "Resolving ..." | 15:36 |
tristan | at startup time | 15:37 |
tristan | seems to be quite new | 15:37 |
tristan | ssam2, maybe you moved the part where we check if we are allowed to push artifacts ? | 15:38 |
tristan | ssam2, I notice now that I also don't get any message about checking if I can push to the cache, yet I do have a push queue | 15:38 |
tristan | ooops | 15:39 |
tristan | and worse | 15:39 |
tristan | ssam2, https://bpaste.net/show/5284ac59f7e9 | 15:39 |
tristan | ssam2, I'm getting that stack trace on *every push* now | 15:39 |
tristan | this looks a bit untested :-S | 15:40 |
ssam2 | ouch, yes | 15:40 |
tristan | I think I have a fix for the latter stack traces at least | 15:42 |
* tristan is testing now | 15:42 | |
tristan | wanted to get the latest into the cache with the new push stuff | 15:42 |
ssam2 | I changed the stuff about checking for push access because now, if your cache is a ssh:// URL, we have to contact it right away | 15:44 |
ssam2 | in order to get hold of the corresponding pull URL | 15:44 |
ssam2 | so we do still check, but we don't announce it now | 15:44 |
ssam2 | perhaps we should have a "contacting cache at ssh://example.com" message though | 15:45 |
tristan | and we do it after Loading and before Resolving ? | 15:45 |
ssam2 | as it could indeed hang horribly in cases where network is bad | 15:45 |
ssam2 | we have to do it before fetching the artifact list | 15:45 |
ssam2 | so probably | 15:45 |
gitlab-br-bot | push on buildstream@master (by Tristan Van Berkom): 1 commit (last: _artifactcache.py: Fixed stack trace regression in pushing of artifacts.) https://gitlab.com/BuildStream/buildstream/commit/628e9a23f8b9d14f0216f85240b5ed148a08b117 | 15:47 |
* tristan fixed the stack trace and at least it's pushing now | 15:47 | |
tristan | ssam2, if you can fix the awkward hang at startup of `bst build` that would also be cool | 15:47 |
ssam2 | i don't see much of a hang | 15:48 |
ssam2 | but, i'm closer to the cache than you | 15:48 |
*** bethw has quit IRC | 15:48 | |
tristan | we will probably be removing the tickers in favor of earlier logging, which will help; either that comes now or it comes when juergbi lands recursive pipelines | 15:48 |
tristan | ssam2, you are exactly the wrong person to observe, you are actually the closest to the cache in the whole world | 15:49 |
tristan | for me, it hangs for like 2 or 3 seconds | 15:49 |
ssam2 | the only way to fix it is to make it parallel | 15:49 |
ssam2 | which should be fairly simple using multiprocessing, right ? | 15:49 |
tristan | I dont mind that the time is spent | 15:49 |
tristan | what I mind is that the user wonders what the hell is going on | 15:49 |
tristan | it's sloppy, we're not even telling them what we're doing, just hanging | 15:49 |
ssam2 | agreed | 15:49 |
tristan | (we did have the same hang before, it was just timed and nicely explained in the log, that's all) | 15:50 |
ssam2 | would be really nice if we could make it parallel though; the other work going on is disk-io-bound | 15:51 |
tristan | I'm not sure it's feasible | 15:51 |
tristan | well | 15:51 |
tristan | part of it | 15:51 |
tristan | I mean, we need to download the summary of what is in the remote cache, in order to construct a build plan | 15:52 |
ssam2 | yeah. we don't do that til after 'resolving', though | 15:52 |
tristan | at least that part needs to happen before we start building anyway | 15:52 |
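A sketch of the parallel idea floated above: start the remote-cache summary fetch in a child process while the disk-io-bound load/resolve work runs, and collect the result just before constructing the build plan. Everything here is illustrative; the fetch body is a placeholder, not the actual BuildStream implementation.

```python
import multiprocessing

def fetch_remote_summary(url, queue):
    # Child process: contact the remote artifact cache and report back.
    # Placeholder body; a real version would do the ssh/ostree query here.
    queue.put({"url": url, "refs": []})

def start_background_fetch(url):
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=fetch_remote_summary, args=(url, queue))
    proc.start()
    return proc, queue

if __name__ == "__main__":
    proc, queue = start_background_fetch("ssh://artifacts@example.com/cache")
    # ... loading and resolving elements would happen here ...
    summary = queue.get()   # needed before the build plan is constructed
    proc.join()
```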
tristan | note, I also change things before so that we only do that thing in commands where we need it | 15:53 |
tristan | we used to hang for like 2 or 3 seconds on `bst show` | 15:53 |
tristan | *changed | 15:53 |
tristan | that was a few weeks ago anyway | 15:53 |
ssam2 | right; we need to avoid talking to the remote cache at all in that case | 15:54 |
tristan | actually for bst show I added a flag specifically for that | 15:54 |
ssam2 | will see about fixing that | 15:54 |
tristan | because we do have a "downloadable" state to show | 15:54 |
tristan | but anyway, refreshing local knowledge of downloadable state is now explicit for that reason | 15:55 |
ssam2 | makes sense | 15:55 |
tristan | code is starting to be in need of fondling and caressing :) | 15:55 |
tristan | too many features being banged in, needs love :D | 15:56 |
ssam2 | looking at the code here, the issue is that ostreecache sets itself up in its constructor | 15:57 |
ssam2 | and that requires the network access | 15:57 |
ssam2 | so need to have some kind of separate ArtifactCache.initialize() function | 15:58 |
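A minimal sketch of the split being suggested: keep construction cheap and move the network-touching setup into a separate method that only commands needing the remote call. The names mirror the discussion but are assumptions, not BuildStream's actual code.

```python
class ArtifactCache:
    def __init__(self, context, remote_url):
        # Cheap: only record configuration, no network access in the constructor.
        self.context = context
        self.remote_url = remote_url
        self._remote_ready = False

    def initialize_remote(self):
        # Called lazily by commands that actually need the remote (push, or
        # fetching the remote ref list), so 'bst show' never pays this cost.
        if self._remote_ready:
            return
        self._connect(self.remote_url)
        self._remote_ready = True

    def _connect(self, url):
        # Backend-specific setup (e.g. resolving an ssh:// push URL into its
        # corresponding pull URL) would go here.
        raise NotImplementedError()
```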
juergbi | ticker removal might also land after recursive pipelines. trying to get it working reasonably well without it to avoid blocking recursive pipelines | 16:00 |
tristan | juergbi, sure; I was under the impression that the recursive loads implied something of that nature, though | 16:04 |
tristan | but I haven't seen it yet :) | 16:04 |
juergbi | i have it working with existing ticker. it's not ideal but i think it should be ok | 16:05 |
juergbi | or rather, with minimally extended ticker | 16:06 |
juergbi | will push a branch update shortly | 16:06 |
*** WSalmon_ has quit IRC | 16:12 | |
gitlab-br-bot | push on buildstream@juerg/recursive-pipelines (by Jürg Billeter): 14 commits (last: _artifactcache.py: Fixed stack trace regression in pushing of artifacts.) https://gitlab.com/BuildStream/buildstream/commit/628e9a23f8b9d14f0216f85240b5ed148a08b117 | 16:20 |
*** bethw has joined #buildstream | 16:27 | |
gitlab-br-bot | push on buildstream@sam/artifact-delay-init (by Sam Thursfield): 1 commit (last: Only initialize remote artifact cache connections if needed) https://gitlab.com/BuildStream/buildstream/commit/b075dd17cf0f4d7bdd0c22726a3a2df5a55d8498 | 16:42 |
gitlab-br-bot | buildstream: merge request (sam/artifact-delay-init->master: Only initialize remote artifact cache connections if needed) #159 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/159 | 16:43 |
gitlab-br-bot | push on buildstream@juerg/recursive-pipelines (by Jürg Billeter): 1 commit (last: Add junction support for subprojects) https://gitlab.com/BuildStream/buildstream/commit/67011274db3e2cf9f6188ec6a134ed8691043601 | 16:43 |
gitlab-br-bot | buildstream: merge request (sam/artifact-delay-init->master: Only initialize remote artifact cache connections if needed) #159 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/159 | 16:44 |
juergbi | ssam2: we should probably coordinate work on supporting multiple artifact servers (needed for subprojects with project-specific artifact urls) | 16:44 |
ssam2 | right, ok | 16:45 |
ssam2 | i made a start already, although it'll need a little rebasing | 16:45 |
ssam2 | if you need it soon, perhaps it makes sense to finish up the 'core' changes quickly, and worry later about UI and testing | 16:46 |
gitlab-br-bot | buildstream: merge request (sam/artifact-delay-init->master: Only initialize remote artifact cache connections if needed) #159 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/159 | 16:47 |
juergbi | your plan is to land it shortly after 1.0 as well? | 16:47 |
ssam2 | i guess | 16:47 |
juergbi | i still have to finish test and documentation aspects of recursive pipelines, which can mostly be done independently | 16:47 |
ssam2 | it'll be backwards compatible with what we have in master now that canonical-pull-urls is merged, so yes, probably best to wait til after 1.0 | 16:48 |
juergbi | the main thing would be that we keep both use cases in mind | 16:48 |
ssam2 | yeah | 16:48 |
ssam2 | the model I had was, there's still just one ArtifactCache object for the pipeline | 16:48 |
ssam2 | but that now has multiple remotes | 16:48 |
juergbi | that sounds good. multiple ArtifactCache instances doesn't look like a good idea | 16:49 |
ssam2 | yeah, it just pushes more work onto the Pipeline object | 16:49 |
juergbi | what i'd need is one group/list of remotes for each project | 16:49 |
ssam2 | ok, that's a bit more complex | 16:50 |
juergbi | i can do it on top of your branch later or we can try to do it at the same time | 16:50 |
juergbi | but it doesn't sound sensible if i do something without your branch | 16:50 |
ssam2 | especially as we still have global ones, that come from ~/.config/buildstream.conf | 16:50 |
ssam2 | probably makes sense if I finish up the core changes and send a WIP merge request | 16:51 |
juergbi | right, either we copy those in each project's remotes or we have some more flexible structure | 16:51 |
ssam2 | I just made it so ArtifactCache has a `urls` list instead of a single `url` attribute | 16:51 |
juergbi | ok, sounds good. i'll also open a WIP merge request for recursive pipelines so that the rest can also be reviewed if desired | 16:52 |
ssam2 | that could become a dict keyed by project, and then operations could take the name of the project | 16:52 |
juergbi | yes | 16:52 |
juergbi | operations that already take an element implicitly have a project reference. for other operations we'll have to take a look | 16:52 |
ssam2 | ah, they all take an element apart from fetch_remote_refs() really | 16:53 |
ssam2 | and that should probably be fetching *all* remote refs | 16:53 |
juergbi | yes | 16:53 |
juergbi | i don't have an easy to access global project list yet but i'll probably add that | 16:55 |
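A sketch of the shape being converged on here: a single ArtifactCache whose remotes are keyed by project, with the global remotes from the user configuration available to every project. The names and structure are illustrative only, not the actual branch.

```python
class ArtifactCache:
    def __init__(self, global_remotes):
        # Remotes from the user config (~/.config/buildstream.conf) apply everywhere.
        self.global_remotes = list(global_remotes)
        # Per-project remotes, keyed by project name.
        self.project_remotes = {}

    def add_project_remotes(self, project_name, remotes):
        self.project_remotes[project_name] = list(remotes)

    def remotes_for(self, project_name):
        # Project-specific remotes first, then the global ones as a fallback.
        return self.project_remotes.get(project_name, []) + self.global_remotes

    def all_remotes(self):
        # fetch_remote_refs() would iterate these and query every remote once.
        seen = []
        for remotes in (self.global_remotes, *self.project_remotes.values()):
            for url in remotes:
                if url not in seen:
                    seen.append(url)
        return seen
```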
*** semanticdesign has joined #buildstream | 16:58 | |
ssam2 | it looks like we save over an hour on CI with the new runners, now the cache is hot | 17:04 |
ssam2 | 1h38 vs. 2h48 | 17:04 |
tlater | Sweet :) | 17:04 |
ssam2 | still way too slow, but a step forwards :-) | 17:04 |
gitlab-br-bot | buildstream: merge request (juerg/recursive-pipelines->master: WIP: Junctions and subprojects) #160 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/160 | 17:10 |
tristan | ouch: https://bpaste.net/show/f7c533794a78 | 17:13 |
tristan | looks like SafeHardlinkOps is mounted there still | 17:14 |
tristan | damn fuse | 17:14 |
*** bethw has quit IRC | 17:34 | |
jonathanmaw | only two failed tests now! | 17:42 |
gitlab-br-bot | push on buildstream@sam/artifact-delay-init (by Sam Thursfield): 1 commit (last: Only initialize remote artifact cache connections if needed) https://gitlab.com/BuildStream/buildstream/commit/f17ef1e47cfce604bcd93373eae9646a96e9122d | 17:42 |
gitlab-br-bot | buildstream: merge request (sam/artifact-delay-init->master: Only initialize remote artifact cache connections if needed) #159 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/159 | 17:42 |
jonathanmaw | test_conflict_source and test_conflict_element still fail, because I've moved the logic that checks for duplicate plugins into project | 17:43 |
*** jonathanmaw has quit IRC | 18:01 | |
*** ssam2 has quit IRC | 18:47 | |
gitlab-br-bot | buildstream: merge request (sam/artifact-delay-init->master: Only initialize remote artifact cache connections if needed) #159 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/159 | 19:21 |
gitlab-br-bot | push on buildstream@master (by Tristan Van Berkom): 1 commit (last: Only initialize remote artifact cache connections if needed) https://gitlab.com/BuildStream/buildstream/commit/f17ef1e47cfce604bcd93373eae9646a96e9122d | 19:21 |
gitlab-br-bot | buildstream: Sam Thursfield deleted branch sam/artifact-delay-init | 19:21 |
*** WSalmon_ has joined #buildstream | 19:48 | |
*** WSalmon_ has quit IRC | 19:52 | |
*** semanticdesign has quit IRC | 20:22 | |
*** bochecha has quit IRC | 20:50 | |
*** valentind has joined #buildstream | 22:09 | |
*** bethw has joined #buildstream | 22:18 | |
*** bethw has quit IRC | 22:58 | |
*** tristan has quit IRC | 23:17 | |
gitlab-br-bot | buildstream: merge request (fix-compose-delete-with-symlink-in-path->master: Remove non canonical path from manifest after integration commands in compose plugin.) #161 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/161 | 23:18 |
*** valentind has quit IRC | 23:33 |