*** benschubert has quit IRC | 00:29 | |
*** narispo has quit IRC | 00:33 | |
*** tristan has quit IRC | 00:33 | |
*** slaf has quit IRC | 00:33 | |
*** doras has quit IRC | 00:33 | |
*** walterve[m][m] has quit IRC | 00:33 | |
*** finnb has quit IRC | 00:33 | |
*** dbuch[m] has quit IRC | 00:33 | |
*** jward has quit IRC | 00:33 | |
*** segfault1[m] has quit IRC | 00:33 | |
*** Trevio[m] has quit IRC | 00:33 | |
*** krichter[m] has quit IRC | 00:33 | |
*** Demos[m] has quit IRC | 00:33 | |
*** persia has quit IRC | 00:33 | |
*** tchaik[m] has quit IRC | 00:33 | |
*** WSalmon has quit IRC | 00:33 | |
*** jjardon has quit IRC | 00:33 | |
*** pro[m] has quit IRC | 00:33 | |
*** cgm[m] has quit IRC | 00:33 | |
*** awacheux[m] has quit IRC | 00:33 | |
*** skullone[m] has quit IRC | 00:33 | |
*** DineshBhattarai[m] has quit IRC | 00:33 | |
*** asingh_[m] has quit IRC | 00:33 | |
*** jjardon[m] has quit IRC | 00:33 | |
*** SamThursfield[m] has quit IRC | 00:33 | |
*** kailueke[m] has quit IRC | 00:33 | |
*** mattiasb has quit IRC | 00:33 | |
*** dylan-m has quit IRC | 00:33 | |
*** reuben640[m] has quit IRC | 00:33 | |
*** rafaelff1[m] has quit IRC | 00:33 | |
*** theawless[m] has quit IRC | 00:33 | |
*** connorshea[m] has quit IRC | 00:33 | |
*** abderrahim[m] has quit IRC | 00:33 | |
*** m_22[m] has quit IRC | 00:33 | |
*** albfan[m] has quit IRC | 00:33 | |
*** tlater[m] has quit IRC | 00:33 | |
*** ironfoot has quit IRC | 00:33 | |
*** flatmush has quit IRC | 00:33 | |
*** benbrown has quit IRC | 00:33 | |
*** jswagner has quit IRC | 00:33 | |
*** narispo has joined #buildstream | 00:33 | |
*** slaf has joined #buildstream | 00:33 | |
*** tristan has joined #buildstream | 00:34 | |
*** finnb has joined #buildstream | 00:34 | |
*** dbuch[m] has joined #buildstream | 00:34 | |
*** jward has joined #buildstream | 00:34 | |
*** segfault1[m] has joined #buildstream | 00:34 | |
*** Trevio[m] has joined #buildstream | 00:34 | |
*** krichter[m] has joined #buildstream | 00:34 | |
*** Demos[m] has joined #buildstream | 00:34 | |
*** persia has joined #buildstream | 00:34 | |
*** tchaik[m] has joined #buildstream | 00:34 | |
*** WSalmon has joined #buildstream | 00:34 | |
*** jjardon has joined #buildstream | 00:34 | |
*** pro[m] has joined #buildstream | 00:34 | |
*** cgm[m] has joined #buildstream | 00:34 | |
*** skullone[m] has joined #buildstream | 00:34 | |
*** awacheux[m] has joined #buildstream | 00:34 | |
*** albfan[m] has joined #buildstream | 00:34 | |
*** kailueke[m] has joined #buildstream | 00:34 | |
*** DineshBhattarai[m] has joined #buildstream | 00:34 | |
*** asingh_[m] has joined #buildstream | 00:34 | |
*** connorshea[m] has joined #buildstream | 00:34 | |
*** jjardon[m] has joined #buildstream | 00:34 | |
*** theawless[m] has joined #buildstream | 00:34 | |
*** m_22[m] has joined #buildstream | 00:34 | |
*** mattiasb has joined #buildstream | 00:34 | |
*** rafaelff1[m] has joined #buildstream | 00:34 | |
*** SamThursfield[m] has joined #buildstream | 00:34 | |
*** dylan-m has joined #buildstream | 00:34 | |
*** tlater[m] has joined #buildstream | 00:34 | |
*** reuben640[m] has joined #buildstream | 00:34 | |
*** abderrahim[m] has joined #buildstream | 00:34 | |
*** ironfoot has joined #buildstream | 00:34 | |
*** flatmush has joined #buildstream | 00:34 | |
*** benbrown has joined #buildstream | 00:34 | |
*** jswagner has joined #buildstream | 00:34 | |
*** irc.acc.umu.se sets mode: +oo jjardon ironfoot | 00:34 | |
*** walterve[m][m] has joined #buildstream | 00:43 | |
*** doras has joined #buildstream | 00:47 | |
*** mohan43u has quit IRC | 00:54 | |
*** mohan43u has joined #buildstream | 00:54 | |
*** tristan has quit IRC | 03:00 | |
*** traveltissues has quit IRC | 05:36 | |
*** traveltissues has joined #buildstream | 05:36 | |
*** seanborg_ has joined #buildstream | 07:18 | |
*** jude has joined #buildstream | 07:25 | |
WSalmon | juergbi, tristan did you work out what was up with the ci for this? fdsdk are rather keen to get this in https://gitlab.com/BuildStream/buildstream/pipelines/142887188 | 07:29 |
*** rdale has joined #buildstream | 07:31 | |
juergbi | WSalmon: no, but I can merge it manually for now as the error is not related to the branch | 07:32 |
juergbi | WSalmon: done | 07:36 |
juergbi | we still need to look into the issue to fix future pipelines | 07:37 |
WSalmon | thanks juergbi | 07:38 |
*** tristan has joined #buildstream | 07:39 | |
*** benschubert has joined #buildstream | 07:41 | |
*** tpollard has joined #buildstream | 08:07 | |
benschubert | tristan, juergbi : concerning the plugin through junction, we were saying we should keep tar. However, I guess we don't want to keep 'downloadablefilesource' in core right? | 08:19 |
juergbi | well, we can't really have tar without that, can we? | 08:20 |
juergbi | or do you mean as a public API? | 08:21 |
benschubert | as a public API | 08:21 |
WSalmon | benschubert, but would we then have it twice? | 08:22 |
benschubert | yeah it would mean either: | 08:22 |
benschubert | - We do a 'stripped' tar approach and move the current tar plugin to elsewhere | 08:22 |
benschubert | - We reimplement everything needed from downloadablefilesource in the tar plugin / duplicate it between buildstream + somewhere else | 08:22 |
benschubert | - Else ? | 08:22 |
juergbi | hm, I don't think we can actually strip all that much | 08:23 |
*** ChanServ sets mode: +o tristan | 08:24 | |
tristan | Is there an issue with keeping DownloadableFileSource in core and making it public ? | 08:24 |
juergbi | so if we need the code in core anyway, I tend towards putting downloadablefilesource as public API in the core | 08:24 |
tristan | We want to avoid that because... we want downloadable file source to evolve separately and possibly more frequently ? | 08:24 |
benschubert | tristan: we would need to release a new buildstream for a bug in it :) | 08:24 |
benschubert | yeah, on the other hand I don't think that would evolve much... | 08:24 |
juergbi | benschubert: we'd have to do the same if we copied it into internal tar... | 08:24 |
benschubert | fair point | 08:25 |
juergbi | the actual public API surface is also fairly small, afaict | 08:25 |
benschubert | should we go for a public 'downloadablefilesource' in Core? | 08:25 |
juergbi | I think so. a possible alternative I see is to provide the corresponding functionality more as a library instead of as a Source subclass | 08:25 |
tristan | I don't think I have a problem with this, i.e. when it comes to fixing bugs, we don't need a minor point feature release for that | 08:26 |
benschubert | that's true | 08:26 |
benschubert | ok let's go for this. Should we keep it at buildstream's root? (buildstream.downloadablefilesource?) Or should we provide a "buildstream.mixins" package that contains helpers for plugin implementers? | 08:27 |
benschubert | (I quite like the second one as it's more explicit) | 08:28 |
tristan | Is there any kind of special feature of "mixins" ? I've heard this word before | 08:28 |
tristan | benschubert, in whichever case, I think it should reside at the same level as BuildElement and ScriptElement | 08:29 |
tristan | Whichever place provides abstract classes | 08:29 |
benschubert | tristan: "mixins" is usually a class that you can inherit from to get more features, | 08:30 |
benschubert | Ah forgot about those two | 08:30 |
benschubert | mixins might not be the right term then for all of this | 08:30 |
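benschubert's description of a mixin — a class you inherit from alongside a base class to gain extra features — can be illustrated with a small generic Python sketch (class names here are illustrative, not BuildStream API):

```python
import hashlib


class ChecksumMixin:
    """Mixin: adds a sha256() helper to any class exposing read_bytes()."""

    def sha256(self) -> str:
        return hashlib.sha256(self.read_bytes()).hexdigest()


class InMemoryBlob:
    """A plain base class that knows nothing about checksums."""

    def __init__(self, data: bytes):
        self._data = data

    def read_bytes(self) -> bytes:
        return self._data


class VerifiedBlob(ChecksumMixin, InMemoryBlob):
    """Gains sha256() purely through multiple inheritance -- the mixin pattern."""


digest = VerifiedBlob(b"hello").sha256()
```

This matches the later observation that "mixin" may not be quite the right term for `DownloadableFileSource`, which is a full abstract base class (like `BuildElement`) rather than a bolt-on capability.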
tristan | benschubert, in fact, I would probably be partial to having it at the same level as Plugin, Element and Source as well | 08:32 |
tristan | Which is currently the root, doesn't really have to be, but it seems to me that you just have a choice of what to derive from (Element, Source, DownloadableFileSource, BuildElement, ...) | 08:32 |
benschubert | yeah, and actually we tell implementers to only import from "buildstream" (so "from buildstream import BuildElement") | 08:32 |
benschubert | ah yeah good point | 08:33 |
benschubert | ok, so let's just make 'DownloadableFileSource' importable from BuildStream and remove the `_`? Then remove it from bst-plugins-experimental along with the duplicated tar plugin? | 08:33 |
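The core job of a downloadable file source is roughly "fetch a URL and verify the payload against a pinned checksum". A toy stdlib sketch of that idea — purely illustrative, not the actual `DownloadableFileSource` API — using a local `file://` URL so it is self-contained:

```python
import hashlib
import pathlib
import tempfile
import urllib.request


def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    """Download `url` and verify the payload against an expected sha256 ref."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise ValueError("checksum mismatch: got %s" % digest)
    return data


# Demonstrate with a temporary local file exposed via a file:// URL.
with tempfile.TemporaryDirectory() as d:
    src = pathlib.Path(d, "payload.txt")
    src.write_bytes(b"hello")
    payload = fetch_and_verify(src.as_uri(),
                               hashlib.sha256(b"hello").hexdigest())
```

The real class additionally handles tracking, mirrors, and caching, which is why the thread concludes it cannot be stripped down much and is better exposed as public API than duplicated into each plugin.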
tristan | I do like keeping this at the root, the only issue I could see is that it might feel crowded if we ever opened up the python API to something other than plugins | 08:33 |
*** phoenix has joined #buildstream | 08:34 | |
benschubert | right, this seems though that it should be a ML thread :'D | 08:34 |
tristan | Every time we think about that, I keep feeling resistant to opening up further API, it does keep coming up though | 08:34 |
benschubert | I think we'll need to be very careful, but there seem to be a need | 08:35 |
tristan | Right now we have 2 python surfaces which are buildstream.testing and buildstream | 08:35 |
benschubert | in which case, should we move all plugins in `buildstream.plugins` ? | 08:35 |
benschubert | *all plugin-related classes | 08:35 |
benschubert | that's yet another breaking change :/ | 08:35 |
tristan | If we keep everything in buildstream and later open up further application API surfaces (Stream etc), then the worst that happens is that we have the plugins at the same place | 08:36 |
benschubert | But might free us in the long term if we end up opening stuff? | 08:36 |
tristan | which honestly, doesnt look bad to me | 08:36 |
benschubert | that's also true | 08:36 |
tristan | What makes "testing" special IMO, is that it could technically be distributed separately | 08:36 |
benschubert | and buildstream/buildstream.testing is quite a clear separation | 08:36 |
tristan | Right, if we had an application facing API, it would *still* need the Plugin/Source/Element APIs to do useful things | 08:37 |
benschubert | correct | 08:37 |
benschubert | ok so let's import it in `buildstream` directly. I can make that change. Is that worth a ML thread? alongside with keeping `tar` in core? | 08:38 |
tristan | I think there is a suitable long standing issue appropriate for that | 08:38 |
tristan | it's been forever requested to be public | 08:38 |
tristan | but I wasn't around for intermediate discussions | 08:39 |
tristan | https://gitlab.com/BuildStream/buildstream/-/issues/610 | 08:40 |
tristan | Old MR: https://gitlab.com/BuildStream/buildstream/-/merge_requests/743 | 08:40 |
benschubert | and with regards to the `tar` plugin? | 08:40 |
benschubert | ML or not? | 08:40 |
juergbi | benschubert: you've mentioned this already in your thread | 08:41 |
tristan | benschubert, I guess it's better to summarize in a post | 08:41 |
tristan | It was also mentioned in the 2.0 planning thread | 08:41 |
benschubert | ok I'll add a last reply to my thread with it in it then :) | 08:41 |
tristan | Whatever, I don't think it's a huge deal but if anyone thinks otherwise go ahead and post :) | 08:41 |
benschubert | juergbi: yeah, however it was quite "hidden" so never sure :D | 08:42 |
tristan | juergbi, regarding error reporting for conflicting projects, I think we need either one of two things: (A) A dictionary of all loaders shared between all the loaders, to error out as we go, or (B) A loader walk at the end of the load process | 08:43 |
tristan | Probably (A) is more performant and better | 08:44 |
tristan | We don't have global shared data otherwise during the load do we ? | 08:44 |
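Option (A) — one dictionary shared between all loaders, keyed by project identity, so a conflict errors out during the load rather than requiring a separate walk at the end — can be sketched like this (names such as `LoaderRegistry` are illustrative, not the actual loader code):

```python
class ConflictingJunctionError(Exception):
    """Raised when two junctions resolve to the same project differently."""


class LoaderRegistry:
    """A single dict shared by all loaders: errors out as we go (option A)."""

    def __init__(self):
        self._loaders = {}

    def register(self, project_name, loader):
        existing = self._loaders.get(project_name)
        if existing is not None and existing is not loader:
            raise ConflictingJunctionError(
                "project '%s' is reached via conflicting junctions"
                % project_name)
        self._loaders[project_name] = loader


registry = LoaderRegistry()
loader_a = object()
registry.register("base-sdk", loader_a)
registry.register("base-sdk", loader_a)      # same loader again: fine
try:
    registry.register("base-sdk", object())  # a different loader: conflict
    conflict = False
except ConflictingJunctionError:
    conflict = True
```

As tristan notes, this is likely cheaper than option (B), a post-load walk, since each registration is a single dict lookup.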
*** santi has joined #buildstream | 08:52 | |
cphang | juergbi we're hitting a nasty bug in https://gitlab.com/celduin/infrastructure/celduin-infra/-/issues/144 where we're getting gRPC errors only on push jobs. Starting from https://gitlab.com/BuildStream/buildstream/-/blob/master/src/buildstream/_artifactcache.py#L515 I think I've localised it to failure at a UploadMissingBlobs call. Given the low | 08:58 |
cphang | level nature of the gRPC error (I think it's coming from https://github.com/grpc/grpc/blob/master/include/grpcpp/impl/codegen/method_handler_impl.h#L44) is it possible there's an unhandled gRPC exception occurring in buildbox? | 08:58 |
benschubert | juergbi: any reason we do a `readlink` there: https://gitlab.com/BuildStream/buildstream/-/blob/master/src/buildstream/testing/_utils/site.py#L77 ? It fails if you simply "install" buildbox-run-bubblewrap as "buildbox-run" instead of symlinking it | 09:14 |
benschubert | thus skipping all sandbox tests :) | 09:14 |
*** seanborg__ has joined #buildstream | 09:15 | |
*** seanborg_ has quit IRC | 09:16 | |
*** phoenix has quit IRC | 09:27 | |
juergbi | cphang: hm, haven't seen this before. odd that buildbox complains about batch download during push (upload) | 09:30 |
juergbi | benschubert: such that the test suite knows which implementation it is | 09:30 |
juergbi | there are a few xfails that are specific to userchroot | 09:30 |
juergbi | not ideal but at least it's only in the test suite | 09:30 |
juergbi | do you have a better suggestion? | 09:31 |
juergbi | maybe the output of --capabilities would be sufficient, at least for some tests | 09:31 |
juergbi | probably have to check xfail by xfail | 09:31 |
cphang | juergbi yeh. we don't think that one causes the other, particularly as we can see this failure mode occurring whether the batch download errors in buildbox occur or not. | 09:31 |
benschubert | juergbi: ah ok fair point. Should it error out instead if buildbox-run is not a symlink? That would be more explicit | 09:33 |
juergbi | benschubert: I'd certainly be fine with that | 09:34 |
benschubert | ok I'll do this :) | 09:35 |
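The stricter behaviour agreed on here — error out explicitly when `buildbox-run` is a plain binary rather than a symlink, since the test suite needs `readlink` to learn which implementation is installed — could look roughly like this (paths and wording are illustrative, not the actual site.py code):

```python
import os
import tempfile


def buildbox_run_variant(path: str) -> str:
    """Return the sandbox implementation name behind a buildbox-run symlink,
    erroring out explicitly when it is not a symlink."""
    if not os.path.islink(path):
        raise RuntimeError(
            "%s is not a symlink; install the concrete implementation "
            "(e.g. buildbox-run-bubblewrap) and symlink it as buildbox-run"
            % path)
    return os.path.basename(os.readlink(path))


# Self-contained demo with a temporary symlink.
tmp = tempfile.mkdtemp()
target = os.path.join(tmp, "buildbox-run-bubblewrap")
open(target, "w").close()
link = os.path.join(tmp, "buildbox-run")
os.symlink(target, link)

variant = buildbox_run_variant(link)     # resolves via the symlink
try:
    buildbox_run_variant(target)         # a regular file: should error
    errored = False
except RuntimeError:
    errored = True
```

An explicit error is friendlier than the current behaviour, where a non-symlink silently skips all sandbox tests.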
benschubert | juergbi: also, do we get any logs for the FUSE stage child? It just dies with a std::runtime_error exception thrown at https://gitlab.com/BuildGrid/buildbox/buildbox-common/-/blob/master/buildbox-common/buildboxcommon/buildboxcommon_client.cpp#L736 ? | 09:35 |
benschubert | when trying to run a test | 09:35 |
juergbi | benschubert: I don't think casd redirects fuse stdout/stderr, so it should be in buildbox-casd logs | 09:36 |
juergbi | we should improve the situation at least for common issues. e.g., fusermount3 missing might be a common issue | 09:37 |
benschubert | and I didn't have it, guess that is my issue :D | 09:38 |
tristan | juergbi, I should have it ready soon, have to fix some tests and possibly clarify some areas, note that the junctions.py::test_build_of_same_junction_used_twice() test would go away with my first MR | 09:49 |
tristan | because it becomes unsupported, and then we discuss how to explicitly support it on the ML, right ? | 09:50 |
juergbi | tristan: I still think we should support a simple way of allowing the same project to be loaded twice right away. we can postpone the private/isolated discussion but I don't like merging an incomplete solution that might completely block some users | 09:54 |
*** tristan has quit IRC | 09:54 | |
*** seanborg__ has quit IRC | 09:55 | |
*** seanborg__ has joined #buildstream | 09:55 | |
*** tristan has joined #buildstream | 10:37 | |
*** ChanServ sets mode: +o tristan | 10:37 | |
tristan | juergbi, went to eat and came back | 10:38 |
tristan | ok so, if you want to solve the ability of loading the same junction twice with (possibly) differing configurations within the same pipeline, before landing the MR... that's fine with me | 10:39 |
tristan | I guess we'll need to figure out how it works | 10:39 |
tristan | while the thought process around reworking the loader is maddening, the code changes to get the first part working are surprisingly small | 10:40 |
tristan | (i.e.: I'm not bothered at all by rebases and postponing a bit) | 10:40 |
juergbi | ok, sounds good. and I assume the code changes for allowing the same project twice should also be fairly simple, it's more about deciding on exact syntax/semantics | 10:41 |
tristan | I don't know, I suspect lines of code to be small(ish) depending on the ultimate route | 10:44 |
tristan | as I recall, you wanted privateness or I preferred the word 'isolation' of projects | 10:44 |
tristan | I also like this approach better because it doesnt force any knowledge on reverse dependency projects | 10:45 |
tristan | but it requires some checks we need to find out (A) whether preventing runtime dependencies from propagating through to reverse dependency projects is the only danger... and (B) how to do the check for (A) if that is indeed enough | 10:45 |
tristan | I suppose that wouldn't cost much code but might be hard to get performant | 10:46 |
tristan | juergbi, anyway I'll prepare my branch with that test removed and we can talk more then | 10:46 |
juergbi | tristan: I think long term we want two approaches: 1) allow the top-level to whitelist a junction to prevent the new error 2) allow intermediate projects to mark junctions as private/isolated | 10:48 |
juergbi | (2) may be a larger discussion but I'd say we can postpone that if we at least have (1), which may be simpler | 10:49 |
tristan | I don't think I agree, but I might :) | 10:49 |
tristan | juergbi, from my point of view, (1) is a superior API, and if we don't have real justification to add (2) then I would prefer that (2) never exists | 10:50 |
tristan | Although it's probably true that (2) is easier to implement, it seems to me (at least right now), that the world would be a better place if only (1) existed | 10:51 |
juergbi | are you mixing up (1) and (2)? I'm confused | 10:51 |
tristan | Oh damn, yes I was | 10:51 |
tristan | completely in all three lines :) | 10:52 |
juergbi | ok, at least you're consistent :) | 10:52 |
tristan | heh | 10:52 |
juergbi | ok, that's an interesting point | 10:52 |
juergbi | I agree that if (2) can cover all use cases, we shouldn't even implement (1) | 10:52 |
juergbi | I thought there might be use cases where (2) doesn't work but maybe that's not the case | 10:53 |
tristan | Yeah, that's what we're not sure about... but I think it amounts to this | 10:53 |
juergbi | so I guess we at least have to partially think about the whole thing right away | 10:53 |
tristan | If you are in a situation where you might have needed to whitelist an error, it means you might be in a case where we force you to create an additional project in between in order to encapsulate/isolate your redundant subproject | 10:53 |
tristan | it might mean that | 10:53 |
juergbi | best starting point would probably be use cases | 10:54 |
tristan | indeed | 10:54 |
tristan | juergbi, both use cases, and really identifying what is the problematic case | 10:55 |
tristan | I'm thinking it's a case where an element has (indirect) runtime dependencies to two different versions of the same element | 10:55 |
juergbi | yes, I think that's the biggest problematic case | 11:01 |
tristan | It could be that we just blatantly allow multiple junctions to the same project, and only error out on that (element related) error condition, too | 11:03 |
juergbi | almost duplicate build-only dependencies for different elements (e.g., two different versions of the compiler) may also be considered an issue in some contexts, and may be acceptable or even needed in other contexts | 11:03 |
tpollard | +1 juergbi | 11:03 |
tristan | then again | 11:03 |
tristan | one interesting thing is that having two different instances of the same element as dependencies, does not mean it will result in a problematic case, or a file overlap | 11:04 |
tristan | For instance, I might have tooling which I use to create images that is based on fdsdk, and I might almost never rev that tooling | 11:05 |
tristan | but I might use those artifacts to build bootable images of newer fdsdk snapshots | 11:05 |
juergbi | or you might even have a script element or similar where you install certain dependency trees in different locations in the sandbox | 11:08 |
juergbi | not causing any conflicts | 11:08 |
tristan | ... in that case of course, I would only have build dependencies, but perhaps I am building a system with multiple sysroots | 11:08 |
tristan | exactly | 11:08 |
tristan | another possible API would be that by default, it's always an error; but projects which have a need could disable the error and deal with overlap errors if they occur later on | 11:10 |
tristan | not that pretty though | 11:11 |
tristan | note that in these cases though; if we were to have a "isolated" junction feature; we could still force such elements to not be propagated forward to reverse dependency projects, and make that illegal | 11:13 |
tristan | we could have an error message saying that "You are indirectly depending on isolated elements" and hinting that the project you are depending on elements from, should provide a `compose` or such element for you to depend on instead | 11:14 |
* tristan finds string division to be an interesting python feature | 11:16 | |
tristan | tmpdir / "sub-project" | 11:16 |
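The "string division" tristan spots is not division at all: pytest's `tmpdir` (a py.path object) and the stdlib's `pathlib` both overload the division operator (`__truediv__`) to join path components. A stdlib illustration:

```python
from pathlib import PurePosixPath

# pathlib paths overload __truediv__, so "/" joins path components
tmpdir = PurePosixPath("/tmp/test")
sub = tmpdir / "sub-project"
```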
*** cs-shadow has joined #buildstream | 11:27 | |
*** rdale has quit IRC | 11:37 | |
cs-shadow | while running tests on bst-plugins-experimental, I am repeatedly getting error from buildbox-fuse about `Error staging "..." into "": "std::runtime_error exception thrown at [buildboxcasd_fusestager.cpp:128], errMsg = "The FUSE stager child process unexpectedly died with exit code 1"` | 11:54 |
cs-shadow | any ideas what might be causing this? | 11:54 |
cs-shadow | full gist at https://gitlab.com/snippets/1974543 | 11:54 |
cs-shadow | I also wonder if BuildStream needs to catch these errors and tell users what they can do | 11:55 |
coldtom | is there a nice way i can get verbose buildbox-casd logs when used in bst? | 11:55 |
*** rdale has joined #buildstream | 11:57 | |
tristan | oh no | 12:00 |
tristan | ah, never mind :) | 12:03 |
cs-shadow | I thought we pass our log levels to casd | 12:04 |
cs-shadow | so `--debug` should help? | 12:04 |
cs-shadow | @coldtom: ^ | 12:04 |
benschubert | cs-shadow: do you have 'fusermount3' ? I had that just before because of this :) | 12:05 |
cs-shadow | benschubert: seems like that was it, thanks! | 12:17 |
cs-shadow | also, I'd appreciate reviews on https://gitlab.com/BuildStream/bst-plugins-experimental/-/merge_requests/106 so that we can unbreak the plugins release on pypi | 12:18 |
tristan | juergbi, I've put up the first iteration here: https://gitlab.com/BuildStream/buildstream/-/merge_requests/1901 | 12:32 |
tristan | I don't know if it will pass all tests but the normal `tox -e py37` is passing | 12:32 |
tristan | There are a couple of open ended questions which remain around the loader (I think I can remove some weird code that appears to still be handling obsolete stuff) | 12:33 |
tristan | Anyway it's a start | 12:34 |
juergbi | tristan: great, thanks | 12:35 |
juergbi | I'd expect the _loaders variable to no longer be used | 12:35 |
juergbi | as there the key was the junction name, which should now be irrelevant | 12:36 |
tristan | juergbi, I thought that was an optimization for _get_loader(), when parsing more than one dependency which depends on something across the same junction | 12:36 |
tristan | it looks still relevant for that :-S | 12:37 |
tristan | index of loader by local junction name | 12:37 |
tristan | I removed a piece of code which sets `_loaders[filename] = None`, and I have a deadcode error about CONFLICTING_JUNCTION in there which I want to understand fully before safely removing | 12:38 |
juergbi | ah, right, for local lookup it's still necessary | 12:39 |
coldtom | that worked, ta cs-shadow | 12:53 |
*** seanborg_ has joined #buildstream | 13:26 | |
*** seanborg__ has quit IRC | 13:26 | |
*** seanborg_ has quit IRC | 13:32 | |
*** seanborg_ has joined #buildstream | 13:32 | |
WSalmon | juergbi, why do some jobs take seconds to fail to push because it's already in cache and others take about a minute to come to the same conclusion, https://gitlab.com/freedesktop-sdk/freedesktop-sdk/-/jobs/542596926#L1502 i would have thought they would all A) be fast, and B) not try to push back to the server they just got it from... | 13:51 |
WSalmon | skip B in this case | 13:52 |
coldtom | at a guess i assume the FindMissingBlobs calls take longer for bigger artifacts | 13:54 |
juergbi | yes, I assume that's the reason for the difference | 13:54 |
juergbi | or actually, we have to distinguish between artifact data and artifact proto | 13:55 |
juergbi | "already has artifact ... cached" means that it already had an artifact proto with that cache key | 13:55 |
juergbi | however, it always said "Pushed data" above, so at least some blobs were apparently missing | 13:56 |
WSalmon | coldtom, is the cache full? | 13:56 |
WSalmon | i would not expect it to get removed between the push and pull | 13:56 |
juergbi | I think there is also a logic error. we should always push our proto if we pushed the blobs | 13:57 |
juergbi | otherwise the server may still have a proto from a previous build with incomplete blobs, and that proto would never get replaced | 13:57 |
WSalmon | but I don't think it should be pushing anything, only seeing that the artifact is already there? | 13:57 |
WSalmon | that too | 13:57 |
juergbi | a long time ago we used to guarantee that if the artifact ref was on the server, the artifact blobs are also still available | 13:58 |
juergbi | in that scenario checking whether the artifact is on the server was very cheap | 13:58 |
juergbi | however, that's no longer the case with blob-based cache expiry etc. | 13:58 |
juergbi | so we may need to be a bit smarter about when we can skip push early on | 13:59 |
juergbi | it's not trivial as e.g. we don't check whether an artifact is complete on the remote when we pull | 14:00 |
juergbi | the Remote Asset API should guarantee this again, though, iirc. should revisit when moving to that | 14:00 |
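juergbi's proposed fix — always re-upload the artifact proto whenever blobs were pushed, so a stale proto referencing expired blobs gets replaced — can be sketched against a fake in-memory remote (the class and method names are illustrative, not the BuildStream/CAS API):

```python
class FakeRemote:
    """Minimal stand-in for a CAS + artifact-proto server."""

    def __init__(self):
        self.blobs = set()
        self.protos = {}

    def find_missing_blobs(self, digests):
        return [d for d in digests if d not in self.blobs]

    def upload_blobs(self, digests):
        self.blobs.update(digests)

    def upload_proto(self, key, proto):
        self.protos[key] = proto


def push_artifact(remote, key, proto, blob_digests):
    missing = remote.find_missing_blobs(blob_digests)
    if missing:
        remote.upload_blobs(missing)
    # Always (re-)upload the proto when blobs were pushed: the server may
    # hold a proto from a previous build whose blobs have since expired,
    # and skipping here would leave that stale proto in place forever.
    if missing or key not in remote.protos:
        remote.upload_proto(key, proto)
        return True
    return False  # everything already cached: skip the push early


remote = FakeRemote()
remote.protos["key1"] = "stale-proto"   # proto present, blobs expired
pushed = push_artifact(remote, "key1", "fresh-proto", ["b1", "b2"])
cached = push_artifact(remote, "key1", "fresh-proto", ["b1", "b2"])
```

This also shows why the old "ref on server implies blobs on server" shortcut is gone with blob-based cache expiry: the cheap proto check alone can no longer justify skipping the `FindMissingBlobs` round trip.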
cphang | juergbi referential integrity is tricky. I consider that a server side responsibility. E.g Buildbarn have a completenesschecker to handle this between Action Cache/CAS https://github.com/buildbarn/bb-storage/blob/master/pkg/blobstore/completenesschecking/completeness_checking_blob_access.go | 14:04 |
cphang | Doing the same thing for the artifact cache is a goal at https://gitlab.com/celduin/infrastructure/celduin-infra/-/issues/16 | 14:06 |
juergbi | yes, with Remote Asset API we should get that | 14:06 |
cphang | yep :) | 14:07 |
coldtom | i'd be surprised if the cache is full, perhaps bst logs the "Pushing blobs" when it calls FindMissing | 14:07 |
juergbi | and I don't think there should be a bb-artifact-cache | 14:07 |
juergbi | as the plan is to move away from that protocol | 14:07 |
cphang | juergbi it's an intermediate step. unless you think there's a viable remote-asset client side implementation in a few weeks? | 14:08 |
juergbi | I'm planning to work on initial parts in a couple of weeks or so. it will take a while until it's complete, though | 14:08 |
juergbi | can't give you an ETA | 14:09 |
cphang | juergbi np, once BuildStream move over to the remote asset API then of course I expect Buildbarn type solutions to move over to that. | 14:10 |
cphang | I think there were discussions on this in #buildbarn a week or two ago. | 14:10 |
*** seanborg_ has quit IRC | 16:00 | |
*** pointswaves has joined #buildstream | 16:49 | |
*** tpollard has quit IRC | 17:00 | |
*** santi has quit IRC | 17:12 | |
*** santi has joined #buildstream | 17:13 | |
*** rdale has quit IRC | 17:48 | |
*** phoenix has joined #buildstream | 19:03 | |
pointswaves | I'm pretty confused why even this is not working https://paste.gnome.org/pjyezik6b benschubert juergbi | 19:09 |
benschubert | pointswaves: page does not exist? | 19:43 |
*** cs-shadow has quit IRC | 19:57 | |
pointswaves | benschubert, https://paste.gnome.org/pmtx7qsbk the default on gnome is only 30min, I have been caught out once or twice forgetting to change it | 20:35 |
*** jude has quit IRC | 20:42 | |
pointswaves | https://paste.gnome.org/popey7v6e | 21:03 |
pointswaves | https://pastebin.com/YKuTjBsV | 21:06 |
*** pointswaves has quit IRC | 21:38 | |
*** phoenix has quit IRC | 23:05 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!