IRC logs for #buildstream for Tuesday, 2018-05-15

*** jennis has joined #buildstream  00:14
*** jennis_ has joined #buildstream  00:14
*** tristan has joined #buildstream  02:24
<tristan> gitlab now has a terms of service!  02:31
*** Prince781 has joined #buildstream  03:19
*** tristan has quit IRC  03:47
*** jennis_ has quit IRC  04:36
*** jennis has quit IRC  04:36
*** Prince781 has quit IRC  04:44
*** ernestask has joined #buildstream  06:05
<paulsherwood> eek... what does that mean?  06:07
*** tristan has joined #buildstream  06:14
<paulsherwood> tristan: is there something in the terms of service you're concerned about?  07:24
<tristan> not really, I just had to click through a "There is now a terms of service" button :)  07:27
<tristan> I did read through it horizontally out of curiosity; they are fairly transparent when compared with evil social media and human harvesting corps :)  07:28
*** toscalix has joined #buildstream  07:32
<paulsherwood> :-)  07:35
<gitlab-br-bot> buildstream: merge request (juerg/googlecas->master: WIP: Google CAS-based artifact cache) #337 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/337  07:52
*** jonathanmaw has joined #buildstream  08:41
*** dominic has joined #buildstream  09:09
*** noisecell has joined #buildstream  09:12
<gitlab-br-bot> buildstream: merge request (jmac/rename_size_request->master: WIP: Logging widgets: Rename 'size_request' to 'prepare') #382 changed state ("closed"): https://gitlab.com/BuildStream/buildstream/merge_requests/382  09:17
*** bochecha_ has joined #buildstream  09:21
<finn> I'm sourcing a file for autotools. The file has a directory structure such that the file I actually want to make is inside 'doc/amhello'  09:26
<finn> Is it possible to prepend to autogen?  09:26
<finn> I've tried this:  09:27
<finn> https://pastebin.com/9kwCNTdg  09:27
<finn> But I think prepend only works for lists, not dicts  09:27
<finn> base-amhello.bst [line 9 column 4]: List cannot overwrite value at: autotools.yaml [line 5 column 11]  09:28
*** aday has joined #buildstream  09:33
<tristan> finn, correct, dicts are entirely unordered things  10:07
<finn> I can't think of a nice solution to grab the autotools hello world example. I've uploaded as far as I got to the examples repo, with comments  10:10
<tristan> finn, does setting the 'command-subdir' to 'doc/amhello' not work for your purposes? or do you *really* only want to override the autogen part of `configure-commands`?  10:10
<finn> Will try now  10:10
<tristan> command-subdir applies to all commands  10:11
<tristan> finn, do you have a link to the specific example I can view, also?  10:12
<gitlab-br-bot> buildstream: merge request (juerg/googlecas->master: WIP: Google CAS-based artifact cache) #337 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/337  10:13
<finn> thanks, command-subdir worked :)  10:19
<finn> You mentioned that yesterday too, but I don't think I'd quite understood  10:19
<tlater> Huh, I didn't know about command-subdir  10:22
<tlater> That's neat :)  10:22
<finn> I'll upload in a mo. The example now builds the official automake example and installs  10:24
<tristan> tlater, it's admittedly tricky to find, because it's implemented (and thus documented) in the shared BuildElement base class  10:24
<finn> ^^ I hadn't quite understood that doc yesterday  10:24
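[A minimal sketch of the fix discussed above: since prepending to `configure-commands` fails (YAML dicts are unordered, so there is nothing to prepend to), the element instead sets the 'command-subdir' variable so every build command runs inside doc/amhello. The snippet emits the element YAML from Python (assuming PyYAML is available); the element name, source kind and URL are illustrative, not taken from finn's pastebin.]

    # Sketch, assuming PyYAML: write a .bst element that uses command-subdir
    # instead of trying to prepend a `cd` to the autogen command.
    import yaml

    element = {
        "kind": "autotools",
        "sources": [
            # Hypothetical source; finn's real source is in the pastebin above.
            {"kind": "tar", "url": "https://example.com/amhello.tar.gz"},
        ],
        "variables": {
            # command-subdir lives in the shared BuildElement base class,
            # so it applies to configure-commands, build-commands, etc. alike.
            "command-subdir": "doc/amhello",
        },
    }

    with open("base-amhello.bst", "w") as f:
        yaml.safe_dump(element, f, default_flow_style=False)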
<tlater> tristan: On the expiry branch btw...  10:27
<tlater> The last thing I have to think about is the triple quota  10:28
<tristan> tlater, you mean triple threshold  10:28
<tlater> Err, yeah  10:28
<tlater> Obviously we'd rather have the user set a single threshold and calculate the other two from that  10:29
<tlater> The question is - what should the margins be?  10:29
<tristan> tlater, As I recall, we had decided that there was only going to be one threshold in your branch, as it intends to land way in advance of CAS  10:29
<tlater> Didn't we need it anyway to ensure that we don't spend all our time in cache cleanup?  10:29
<tristan> i.e. "for now" you are not allowing cache cleanup to happen concurrently with ongoing builds (and potential cache commits)  10:29
<tristan> two thresholds is enough for that  10:30
<tlater> We have to wait for all current jobs to finish, because otherwise we may remove in-progress ostree commits  10:30
<tristan> you need a third only when you allow a cleanup to happen with ongoing commits at the same time, which we should ideally be doing, but are not going to before CAS  10:30
<tlater> Alright, I'll just take that at face value for now... In either case, finding a good margin here is difficult, because it depends on the average artifact size  10:32
<tristan> tlater, the lower threshold is the ideal smallest target cache size; the higher threshold is the one where you wait for builds to complete and queue up a cleanup job before triggering more builds  10:32
<tristan> yes, that is a separate problem  10:32
<tlater> I think the optimal solution is the maximum expected artifact size * the number of possible concurrent build jobs  10:33
<tristan> i.e. how we derive these values from user configuration is a separate activity  10:33
<tlater> Because that way we'd end up perfectly hitting the spot where we filled up the cache every time  10:33
<gitlab-br-bot> buildstream: merge request (issue-21_Caching_build_trees->master: WIP: Issue 21 caching build trees) #372 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/372  10:33
<tristan> tlater, that is exactly wrong  10:33
<tlater> The question is just, what is the maximum expected artifact size?  10:33
<tristan> tlater, i.e. if you mean that you want *that* distance between the two thresholds, it's untrue  10:34
*** bethw has joined #buildstream  10:34
<tristan> tlater, you ideally want a cache cleanup to trigger at most once per build session  10:35
<tristan> i.e. you never *want* it to happen; it's an evil necessity that slows things down, it's an annoyance  10:35
<tristan> tlater, I might consider starting with something like 50% for the lower threshold  10:36
<tristan> and then that 50% *anyway* has a lower limit: it is limited by not being allowed to delete artifacts which are "pinned" by the current build plan  10:36
<tristan> so that 50% might become 75% with a small quota  10:37
*** bochecha_ has quit IRC  10:37
<tlater> So, on this low threshold, what do we do? Do we launch a cleanup job, just to make sure we go down to a reasonable amount at the end of the build?  10:38
<tristan> ok, so to think about that... take the user-configured value and throw it away  10:38
<tristan> now we're thinking of how the machine works  10:39
<tristan> you have two values  10:39
<tristan> When you reach the upper value, you stop queuing build tasks  10:39
<tristan> Then you launch the cleanup task once build tasks are complete  10:39
<tristan> or "At the earliest free opportunity"  10:39
<tristan> Accounting for the failed builds and such which will cause interactive prompts etc., when the user "continues"... and no builds are running...  10:40
<tristan> Then you launch your cleanup task  10:40
<tristan> tlater, When the cleanup task runs... it removes artifacts until you reach the lower threshold (or stops earlier if removing more artifacts would cause "pinned" artifacts to be removed)  10:41
<tristan> Ideally, when the cleanup task completes, the artifact cache size is <= lower threshold  10:41
<tlater> Hm, this would mean that we'd always launch a cleanup job once the lower threshold has been reached  10:41
<tristan> No  10:42
<tristan> Once the higher threshold is reached, you stop queueing jobs and launch a cleanup  10:42
<tristan> That single cleanup job removes alllllllll the artifacts, all the way down to the lower threshold  10:42
<tristan> leaving you with tons of space in your quota  10:42
<tlater> tristan: I mean that we'd launch a cleanup job once per pipeline once we ever reach the lower threshold  10:43
<tristan> so you likely don't have to see the cleanup job more than once in a run  10:43
<tlater> Because you'll inevitably run out of elements to build eventually  10:43
<tristan> What do you mean?  10:43
<tlater> Say the user continued out of a pipeline with failed build elements, and this launched a cleanup job  10:44
<tlater> We are now just under the lower threshold  10:44
<tristan> You only ever schedule the cleanup job once you reach the UPPER threshold  10:44
<tlater> The next time the user launches a pipeline that creates any artifact, we will almost certainly launch a cleanup job at the end again  10:44
<tristan> I'm not following why you think that  10:44
<tlater> I thought you'd launch a cleanup task at the lower threshold, but that makes sense, yeah  10:45
<tlater> The lower threshold is essentially just the value we want to reduce the cache size to - it doesn't trigger any action from buildstream  10:45
<tristan> I think I've been saying the same thing... but capitalization of 'upper' works?  10:45
<tristan> exactly  10:45
<tlater> Haha, sorry, I thought you'd said the opposite earlier  10:45
<tristan> those are the meanings of the two thresholds  10:45
<tlater> Probably misread  10:45
<tlater> Yep, that makes sense  10:46
<tristan> So I think 50% is a decent start for the lower, "target size" threshold  10:46
<tlater> If we add a third threshold, *that* threshold would be when we launch the job  10:46
<tlater> And that threshold should then be close to max artifact size * number of jobs  10:46
<tlater> I presume?  10:46
<tristan> tlater, exactly... and it would reside somewhere around 75%, let's say  10:46
<tristan> tlater, but it would not stop queueing jobs; jobs only stop getting queued at the upper limit  10:47
<tlater> Yep, I understand, ta  10:47
<tlater> I think 50% is a pretty good value  10:47
<tristan> The way that we arrive at that number is rather orthogonal to the initial implementation, but I think it's a decent number too :)  10:48
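[A rough sketch of the two-threshold scheme settled on above, under the numbers discussed (50% of the quota as the lower "target size", the quota itself as the upper limit where builds stop being queued). None of this is BuildStream's actual scheduler API; all names are illustrative.]

    # Sketch: build jobs stop being queued past the UPPER threshold; a single
    # cleanup job then prunes down to the LOWER target, so cleanup runs at
    # most about once per build session.
    QUOTA = 50 * 1024 ** 3        # user-configured cache quota, e.g. 50GB
    LOWER = int(QUOTA * 0.50)     # ideal post-cleanup cache size
    UPPER = QUOTA                 # stop queueing build jobs past this
    # An optional third threshold (~75%) would let a cleanup start while
    # commits are still ongoing; not planned before CAS lands.

    def may_queue_build(cache_size: int) -> bool:
        # Past the upper threshold, wait for running builds to finish and
        # schedule the cleanup at the earliest free opportunity.
        return cache_size < UPPER

    def cleanup(artifacts, cache_size: int, pinned: set) -> int:
        # Remove least-recently-used artifacts until the lower threshold is
        # reached, never touching artifacts pinned by the current build plan
        # (which is why 50% may effectively become 75% with a small quota).
        for artifact in sorted(artifacts, key=lambda a: a.mtime):
            if cache_size <= LOWER:
                break
            if artifact.key in pinned:
                continue
            cache_size -= artifact.size
            artifact.remove()
        return cache_size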
<tlater> We can always optimize in the future if we get more data...  10:48
<tlater> Buildstream telemetrics?  10:48
<tlater> ;p  10:48
<tristan> Capital S  10:48
<tristan> toscalix has started a trend, I'm afraid  10:48
<tlater> Ack, apologies, typo  10:49
* tlater will hopefully get to implementing this over the next few days and finally get this MR done  10:49
<tristan> tlater, sec...  10:50
<toscalix> Based on my experience with OpenSUSE, openSUSE, openSuSe, openSuse, Opensuse..... I predict capitalization will be the biggest source of headaches for tristan in the next 5 years  10:52
<tristan> tlater, https://gitlab.com/BuildStream/buildstream/merge_requests/421/diffs#note_72664689 <-- is this going to be relevant to the local cache?  10:52
<tristan> toscalix, uh oh... now I *KNOW* who is responsible for having started mis-capitalization in the openSUSE camp  10:53
<tristan> toscalix, don't do it! ;-)  10:53
<tlater> tristan: No, we worked quite closely together. That MR actually contains some of the API I added for local expiry.  10:54
<tristan> tlater, that comment, you know... calculating the size is a *very intense* operation  10:54
<tlater> Oh, it didn't link me to the comment :|  10:55
<tlater> Hang  10:55
<tristan> tlater, I am presuming we are caching some number with the assistance of ostree here, and that we know how much we've added to the repo when we make a commit  10:55
<tlater> tristan: Oh, yeah, I made very sure to be careful about using that function  10:55
<tristan> so we can easily keep track of what we're spending without spamming the hardware relentlessly  10:55
<tlater> We should use it once at most for the full cache - unfortunately ostree can't report cache sizes |:  10:56
<tristan> tlater, once... "in a session"?  10:56
<tlater> Yes, unless I start writing the session size to a file  10:56
<tlater> Even then the accuracy would be abysmal  10:57
<tlater> But I make sure a separate thread calculates it, at least  10:57
<tristan> So how do we know how much the cache grew after a commit?  10:57
<tlater> We guess based on the artifact's directory size  10:57
<tlater> And only calculate when *that* grows too large  10:57
<tristan> So we assume that builds are 100% non-reproducible, and that ostree has no deduplication?  10:58
<tlater> No, we assume that that gives us a nice upper limit on the actual size  10:58
<tristan> I guess that is kind of okay-ish  10:58
<tlater> We then figure out the actual cache size if that upper limit becomes too large  10:59
<tristan> So we fake it by calculating the size of directories we add, assuming no deduplication  10:59
<tristan> and when we near the quota, we scratch the disks?  10:59
<tlater> Yes, that's what the MR currently does...  11:00
<tlater> I don't really have any other way to deal with it, though  11:00
<tlater> Because assuming no deduplication means that the guess is quite inaccurate  11:00
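[A sketch of the estimation strategy tlater describes: treat each committed artifact's plain directory size as an upper bound on cache growth (ignoring ostree deduplication and reproducibility), and only re-walk the whole repository when that pessimistic estimate nears the quota. Class and method names here are hypothetical, not the MR's actual code.]

    # Sketch: pessimistic running estimate, full recalculation only when needed.
    import os

    def directory_size(path: str) -> int:
        """Recursively sum file sizes; cheap for one staged artifact, very
        expensive (the `du -hs` problem) for a whole repo on a cold cache."""
        total = 0
        for root, _, files in os.walk(path):
            for name in files:
                total += os.lstat(os.path.join(root, name)).st_size
        return total

    class CacheSizeEstimate:
        def __init__(self, repo_path: str, quota: int):
            self.repo_path = repo_path
            self.quota = quota
            self.estimate = directory_size(repo_path)  # one full walk per session

        def add_artifact(self, staged_dir: str) -> None:
            # Upper bound: assumes no deduplication at all.
            self.estimate += directory_size(staged_dir)

        def maybe_recalculate(self) -> None:
            # Only pay for another full walk when the estimate nears the quota.
            if self.estimate >= self.quota:
                self.estimate = directory_size(self.repo_path)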
<tristan> tlater, note that thread or no thread... this is still very concerning... try running `du -hs` in a directory that is around 60GB in size, full of fairly small files (preferably after a clean boot)  11:00
<tristan> it will bog down the machine for sure; I/O wait is going to suffer a lot  11:01
<tristan> Maybe we should reverse this completely; however, if CAS is designed for this, quotas are a preferable configuration API  11:04
<tlater> I dislike this, too... Does CAS support a nicer way of determining repository size?  11:04
<tristan> If this is going to be a short-lived experience of "OK everybody, let's wait for 20min to determine the size of the cache before the builds continue!", then maybe it's alright  11:05
* tlater doesn't think 20 minutes is alright for small builds, but this should be very rare for those at least  11:06
<jmac> tlater: I don't think it does, unfortunately.  11:07
<tlater> Considering that, it probably isn't a bad idea to keep a file containing the cache size between runs.  11:07
<tristan> tlater, I am making a bit of a presumption based on juergbi being aware of the designs we've been working on, but we ought to make sure that juergbi has considered this in his CAS implementation  11:07
<tristan> if we're going to use this approach at all  11:07
<tristan> tlater, the opposite approach is to not have a quota, but to instead clean up artifacts based on remaining space on the partition  11:08
<tristan> which is always snappy, but a shitty configuration API  11:09
<tlater> We *could* make that the default  11:09
<juergbi> The current CAS branch doesn't have any special features for this, so it will work the same way (once purge is implemented)  11:09
<tristan> A user wants to say "Use at most 50GB", not "Always leave 50GB of space on my disk"  11:09
<juergbi> However, it should be easier to implement a repository size cache as we control the whole code  11:10
<tlater> But some users say "I don't care", and for those we could leave a few GB on their disk :)  11:10
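[The "opposite approach" is cheap to sketch, because querying free space is a single statvfs call rather than a tree walk, which is why it is "always snappy". The headroom constant and function name are illustrative.]

    # Sketch: trigger cleanup from remaining partition space, not a quota.
    import shutil

    HEADROOM = 2 * 1024 ** 3  # "leave a few GB on their disk"

    def needs_cleanup(cache_path: str) -> bool:
        # shutil.disk_usage() is a statvfs under the hood: no directory walk.
        return shutil.disk_usage(cache_path).free < HEADROOM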
<juergbi> (it's generally not trivial due to parallel operations, though)  11:10
<tlater> How do DBs solve this?  11:11
<juergbi> non-cache DBs can't implicitly expire stuff  11:11
<juergbi> a DB-like approach for handling the parallel size updates would definitely be possible  11:12
<tristan> You would likely need an active process for CAS  11:12
<tristan> wouldn't be nice and simple like SQLite  11:12
<juergbi> no, something SQLite-like (or an actual SQLite DB) should work as well  11:12
<tristan> would be full-blown complex IPC-level stuff  11:13
<tristan> oh?  11:13
<tristan> perhaps; I never actually trusted parallel writes and file locking with SQLite in production  11:13
<juergbi> SQLite is well-tested; I don't expect any issues if we are ok depending on it  11:14
<juergbi> with WAL (write-ahead logging) it uses a SHM area for coordination  11:15
<tristan> I've done a lot of many-readers/one-writer with SQLite  11:15
<juergbi> but doesn't require a daemon  11:15
<tristan> With WAL, indeed  11:15
<tristan> WAL gives you a huge performance boost with many-readers-one-writer: you either read the last commit or the next, but never block  11:16
<tristan> whether that means you trust multiple processes to not futz things up is another story  11:16
<tristan> (at write time)  11:16
<tristan> but perhaps it has gotten better in recent years  11:16
<juergbi> yes, in general you can't use it if you don't trust the other processes  11:16
<juergbi> but that shouldn't be an issue, at least for the local artifact cache case  11:16
<juergbi> and hopefully also not for the server side  11:17
*** aday has quit IRC  11:18
<juergbi> a small SHM area and a cross-process mutex might even be sufficient  11:18
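[A minimal sketch of the daemon-less, SQLite-backed counter juergbi suggests: a database in WAL mode acts as the cross-process coordination point, so concurrent BuildStream processes can update the cache size without a live server. The schema and helper names are assumptions, not anything from the CAS branch.]

    # Sketch: shared cache-size counter in SQLite with write-ahead logging.
    import sqlite3

    def open_size_db(path: str) -> sqlite3.Connection:
        db = sqlite3.connect(path)
        db.execute("PRAGMA journal_mode=WAL")  # SHM-coordinated, no daemon
        db.execute("CREATE TABLE IF NOT EXISTS cache (size INTEGER NOT NULL)")
        if db.execute("SELECT COUNT(*) FROM cache").fetchone()[0] == 0:
            db.execute("INSERT INTO cache VALUES (0)")
        db.commit()
        return db

    def add_to_size(db: sqlite3.Connection, delta: int) -> None:
        with db:  # one short write transaction per artifact commit
            db.execute("UPDATE cache SET size = size + ?", (delta,))

    def current_size(db: sqlite3.Connection) -> int:
        # Under WAL, readers see the last commit and never block the writer.
        return db.execute("SELECT size FROM cache").fetchone()[0]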
<tristan> about 17 seconds to `du -hs` my cache, which is nice and small at 32 gigs  11:18
<juergbi> there will likely be a huge difference depending on how hot the kernel dentry cache is with regard to the directory tree  11:19
<juergbi> twice in a row, the second one will likely be very fast  11:19
<tristan> yes  11:19
*** aday has joined #buildstream  11:20
<tristan> So, likely this means a really obnoxious delay at the beginning of `bst build` and `bst pull`  11:21
<tristan> when it's the first time in a while  11:21
<tristan> and a cold cache  11:21
<tristan> after that, the later checks won't be as costly  11:22
<tlater> That's presumably only on Linux, though  11:22
<jmac> On the server we'd just record size updates in bst-artifact-receive, no?  11:22
<tlater> Other platforms might do worse  11:22
<tristan> jmac, on the server side we've been stalling on implementing a "stop gap" for too long, and it will be entirely obsoleted by CAS  11:22
<tristan> but with `bst-artifact-receive` the problem is certainly not simpler  11:23
<tristan> assume you have approximately 8 artifacts being simultaneously uploaded at all times  11:23
<juergbi> tristan: not really obsoleted, no. it's just a 'touch' on access and then use the same code  11:23
<tristan> juergbi, we're not getting rid of the OSTree remote cache??  11:24
<tristan> I thought we are going CAS all the way  11:24
<tristan> no more?  11:24
<juergbi> yes, we are  11:24
<jmac> tristan: It's only an atomic update of one integer  11:24
<juergbi> but the basic approach of artifact expiry can be very similar  11:24
<tristan> Right  11:24
<juergbi> one nice difference being that we can use access time instead of mtime, but the rest stays pretty much the same  11:25
<tristan> juergbi, I see, so you don't anticipate that we synchronize cleanups on the server?  11:25
<juergbi> we probably should, yes, and not just on the server, also on the client side  11:26
<juergbi> but I see this as orthogonal to OSTree vs. CAS  11:26
<tristan> jmac, it is most certainly not as simple as just an atomic update of one integer: right now you have hundreds of hashed objects in an artifact, you have to know which ones already exist in the cache to know how much you grow the cache, and 8 separate uploads are happening at once  11:28
<tristan> all of that has to get the right number at the end of the day  11:28
<juergbi> it should definitely be easier with all code being under our control  11:29
<tristan> So I suppose it could be done while renaming files from the tempdir to the cache, and observing whether they already existed; those operations then need to be made atomic too  11:29
<tristan> maybe I'm off base, but I'm much less worried about a coordination process crashing than about the side effects of stale locks left behind by independent processes  11:30
<Nexus> why does workspace reset contain a lot of cloned code from workspace open instead of just calling workspace open like it used to?  11:31
<juergbi> with O_EXCL, lock files can be handled properly  11:31
<juergbi> should be overall much easier than a daemon  11:31
<tristan> Nexus, loading is an expensive recursive process; you don't want to do it again and again and again  11:32
<juergbi> on the server we already have a daemon, so we could implement it like that; however, if we need a daemon-less approach on the client side, it could make sense to do the same on the server  11:32
<tlater> Nexus: If you really wanted to, you could make that bit of code a separate function so you can call it without loading :)  11:32
<jmac> tristan: Each file we upload to the cache either overwrites a file or adds a new file; either way we know how much we've increased storage by. You'd need to stat the existing object, but that's all  11:33
<Nexus> tristan: it makes it rather difficult for me to do the cached build tree logic, as I'm basically writing duplicated code  11:33
<Nexus> what is currently there only works if you're using sources, not a cached build  11:33
<tristan> Nexus, not to mention, you want to reuse Stream._fetch() on the *whole batch* if and when you need it, and if you have a failure, you want it to happen *before* you start modifying workspaces  11:33
<tristan> Nexus, in other words, IMO the old code was quite broken in practice while attempting to reuse code  11:34
<jmac> If you have two artifacts being uploaded at once which overwrite the same files, then you have a potential ordering problem  11:34
<jmac> I think that problem disappears with the CAS, though  11:34
<tristan> jmac, right, we upload to a temp cache so that we don't write partial stuff directly; currently they are a safe atomic os.rename(), but (os.rename() + increment size by os.lstat().st_size) has to become atomic afaics  11:36
<tristan> I guess it's not *immensely* complicated  11:36
<tristan> but it's just ingrained in me to avoid flock() at high costs  11:36
<jmac> I'm not sure you need file-level locking; I was imagining a basic service which used multiprocessing's locks  11:38
<tristan> jmac, right, that's kind of the idea I was spinning above, rather than using threading locks; once you have a live entity server you can start to serialize things  11:38
<tristan> jmac, I *feel* like it's safer, but juergbi prefers locks to a daemon, and I'm honestly not sure I'm prepared to argue either way right now  11:39
<tristan> my feelings on it being safer probably come from the days of the Linux 2.6 kernel  11:40
<tristan> the world has changed, and people are using SQLite with multiple writing processes to the same DB, so I am out of date ;-)  11:41
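[A sketch of the commit-time bookkeeping under discussion: an object is renamed out of the tempdir into the cache, and the size counter is bumped only for objects that did not already exist (deduplication). The flock-guarded size file stands in for whichever coordination mechanism (O_EXCL lock files, SQLite, or a daemon) is eventually chosen; note the exists/rename pair is itself still racy across processes, which is exactly the gap this conversation is about.]

    # Sketch: atomic-ish (rename + size increment) with a locked size file.
    import fcntl
    import os

    def commit_object(tmp_path: str, cache_path: str, size_file: str) -> None:
        grew = not os.path.exists(cache_path)  # deduplicated objects add nothing
        size = os.lstat(tmp_path).st_size
        # os.rename() is atomic on the same filesystem; an existing object is
        # simply replaced by identical content.
        os.rename(tmp_path, cache_path)
        if grew:
            with open(size_file, "a+") as f:
                fcntl.flock(f, fcntl.LOCK_EX)  # serialize the ~8 parallel uploads
                f.seek(0)
                current = int(f.read() or "0")
                f.seek(0)
                f.truncate()
                f.write(str(current + size))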
<tristan> tlater, ok well - all of this started with a "thing" which is not yet solved for you  11:42
<Nexus> tristan: ok, well I will need some way of storing whether or not a cached build tree was used originally when opening the workspace  11:43
<tristan> tlater, I'll say this then: it seems to cost a not-so-huge amount of time, likely ~30 seconds at startup for a 50GB cache, and then the later requests, the way you have arranged them, seem to be low cost on Linux  11:43
<tristan> tlater, I think that we can run with this for now, but we really may want to seriously reverse this after  11:44
<tristan> tlater, it may be worth raising this on the ML for more eyes, too, but I don't want this detail to block landing the cleanup tasks  11:44
<tristan> Nexus, How so?  11:45
<tlater> tristan: Alright, I'll also make sure I give the ostree doc another dive when I get the chance to.  11:45
<tristan> Nexus, when you reset a workspace, it's quite similar to closing and opening it; what changes here?  11:46
<Nexus> tristan: because from what I can see, it looks like the old workspace is deleted, a new one created, and then opened using sources  11:47
<tristan> Nexus, when it comes time to call Element._open_workspace(), you have a clean area to start with  11:47
<gitlab-br-bot> buildstream: merge request (valentindavid/359_cross_junction_elements->master: WIP: Allow names for cross junction elements) #454 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/454  11:47
<tristan> Nexus, so I would suppose that Element._open_workspace() can at that time know whether a cached build tree is available, as usual  11:47
<tristan> Nexus, in other words, I'm not sure how any of this high-level Stream() stuff makes a difference to the underlying mechanics, even  11:48
<Nexus> tristan: because Element._open_workspace() was only going to get called if they didn't want to use a cached build  11:49
<tristan> Nexus, note that by the time we hit the element, in that loop in Stream.workspace_reset(), it is already determined whether tracking was done (potentially changing the cache key), so when Element._open_workspace() asks for the cache key, it should have the new updated one  11:49
<tristan> Nexus, So then your business logic is too high up in the food chain; when we ask an Element to open a workspace, we are not asking the element to specifically just stage the sources, we are just asking it to open a workspace  11:50
<tristan> Nexus, if there is a preference coming from on high at the CLI level, we should pass that preference through as an argument  11:50
*** aday has quit IRC  11:51
<Nexus> tristan: if we wait until the element to decide to use a cached build tree, then we're doing a lot of unneeded work beforehand in the Stream.workspace_open() function  11:52
*** aday has joined #buildstream  11:52
<tristan> Nexus, I don't understand... Why? What exactly is the preference, and what time have we wasted?  11:52
<Nexus> you'd be wasting a lot of time on finding all of the sources, which would then not be used  11:53
<tristan> I.e. we're still going to re-open the workspace in a reset, whether or not we prefer to use a cached build, right?  11:53
<tristan> Ah, because fetch?  11:53
<tristan> Nexus, now you raise an interesting problem :)  11:54
<Nexus> :)  11:54
<tristan> So, A.) It's important to run self._fetch() on everything before moving on, especially because it might track, and the operations might fail  11:55
*** aday has quit IRC  11:55
<tristan> But B.) It's unneeded to self._fetch() in the case that you want to use a cached build  11:55
*** aday_ has joined #buildstream  11:56
<Nexus> which is why I had the "do you want to use the cache" logic so high up  11:56
<tristan> I think (A) is important because the operation can become destructive  11:56
*** aday_ is now known as aday  11:56
<tristan> Nexus, so I'm not sure exactly what you had before, but I presume you were basically "falling back" to fetching the sources only if a cached build is unavailable?  11:57
<Nexus> tristan: unavailable or explicitly not wanted  11:58
<tristan> Nexus, I think you can still factor that into the current code; of course you cannot do that if you need to `--track`  11:58
<tristan> it will just become a bit more wordy  11:58
<tristan> Nexus, I would still delegate the real work to Element._open_workspace() though, and ask it specifically for a cached build first, but have it either return something meaningful or raise a meaningful error in the case one was unavailable  11:59
<Nexus> well, I think we need to make the `--track` and `--cached-build` flags mutually exclusive, because you can't have both  12:00
<tristan> Then again, the (A) problem comes back  12:00
<tristan> which quite sucks  12:00
<tristan> Well, you surely could have both  12:01
<tristan> Nexus, only after you track do you know the new cache key  12:01
<tristan> And then depending on that, you probably try another scheduler run to attempt to pull the artifact, etc.  12:01
<Nexus> assuming you don't want a previous cache?  12:01
<Nexus> tristan: I think all of the tracking/fetching logic needs to be moved into the element function, because having that stuff done in both workspace_open and workspace_reset seems wasteful to me  12:02
*** jennis has joined #buildstream  12:02
*** jennis_ has joined #buildstream  12:02
<tristan> Nexus, If you run `bst workspace reset --track foo.bst`, then you *certainly* are interested in whatever corresponds to the *new* cache key  12:02
<tristan> Nexus, I don't see any room for ambiguity there  12:02
<Nexus> ok, in the case of reset, then yes  12:03
<Nexus> but not in open  12:03
<tristan> Nexus, no, you cannot put that stuff into the element; that stuff runs schedulers, which creates cyclic ownership weirdness  12:03
<Nexus> :/  12:03
<tristan> When opening a workspace, if you `bst workspace open --track foo.bst`, AGAIN you can only possibly be interested in the *tracked* cache key  12:04
<tristan> The problem with grouped workspace commands is basically that you want to succeed or fail as a group  12:04
<tristan> Or get as damn close to it as possible  12:05
<tristan> it's pretty ugly to have it fail but have also modified stuff  12:05
<tristan> Nexus, so, first of all it seems to me A.) you won't want to use the shared Stream._fetch() function, or at least you may need to modify it to allow track() without fetch()  B.) You probably want to share code with workspace_open() and workspace_reset()  C.) workspace_open() is going to be another target of multiple element operations  12:17
<tristan> Nexus, the code for this logic seems to involve tries with uncertain success and then fallbacks to other things; in order to safely share this... one option might be to do the work in a temp dir first, before committing the results  12:18
<tristan> Nexus, i.e. in this light, the worst thing which can happen at "commit" time is that we fail to remove the original directory and replace it with the newly created open workspace  12:19
<tristan> This approach would seem to allow for sharing the complex logic of "doing the minimal" for both workspace_open() and workspace_reset(), while minimizing the risk of committing a partially successful result  12:20
<Nexus> ok, this is going to need a bit of planning then  12:21
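[A sketch of the temp-dir-then-commit flow tristan outlines, shareable between workspace open and reset: the expensive, failure-prone work (fetching, staging, falling back from a cached build tree) happens in a temporary directory, and only the final cheap renames touch the user's workspace path. Every Element method used here is hypothetical, purely to illustrate the shape.]

    # Sketch: do the risky work in a tempdir, commit with renames at the end.
    import os
    import shutil
    import tempfile

    def open_workspace(element, target_dir, use_cached_build_tree):
        parent = os.path.dirname(os.path.abspath(target_dir))
        tmp = tempfile.mkdtemp(prefix=".workspace-", dir=parent)
        try:
            # Hypothetical Element API: fall back to sources when no cached
            # build tree exists or it was explicitly not wanted.
            if use_cached_build_tree and element.has_cached_build_tree():
                element.stage_cached_build_tree(tmp)   # no fetch needed
            else:
                element.fetch_sources()                # may track/fetch, may fail
                element.stage_sources(tmp)
            # Commit: the only destructive steps happen last, and are cheap.
            if os.path.exists(target_dir):
                backup = target_dir + ".old"
                os.rename(target_dir, backup)
                os.rename(tmp, target_dir)
                shutil.rmtree(backup)
            else:
                os.rename(tmp, target_dir)
        except BaseException:
            shutil.rmtree(tmp, ignore_errors=True)  # nothing was modified
            raise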
<finn> Competition time.  12:27
<finn> Can anyone think of a better name for BuildFarm?  12:28
<tlater> Dam  12:28
<tristan> Hah  12:28
<tristan> finn, We need a plumbing analogy :)  12:29
<tristan> what is a network of pipelines?  12:29
<finn> MarioWorld  12:29
<finn> Arboretum  12:30
<skullman> Interchange  12:31
<skullman> SewageTreatmentPlant  12:31
<jmac> Pipemania  12:32
<finn> WaterSupply  12:32
<tristan> 'Sewage' on its own is cute haha  12:32
<tlater> Hmm, where else do pipes go if not sewage?  12:33
<tlater> Pool?  12:33
<tlater> BuildPool? Meh  12:33
<finn> Bayou  12:33
<skullman> WateringHole  12:33
<tlater> River? Many streams?  12:34
<tlater> Sea/Ocean?  12:34
<tristan> Irrigation!  12:34
<skullman> Hydraulics  12:34
<tlater> Heh, irrigation is neat, but sounds too technical  12:35
<skullman> Hydroponics  12:35
<finn> Although irrigation is used in farming  12:35
<skullman> hm, a bit close to NixOS' Hydra.  12:36
<skullman> sewage.buildstream.gnome.org would clearly be the artifact server containing the failed builds  12:37
<finn> Confluence  12:37
<skullman> Effluence  12:37
<tristan> Conduit?  12:38
<skullman> ooh, Conduit's a good one  12:38
<finn> Conflux  12:38
* tlater likes conduit, too  12:38
<finn> it's a pipe which channels water  12:40
<finn> plenty of puns in there  12:40
<finn> Conduit then?  12:40
<tristan> It's got a ring to it; I wouldn't throw in the towel just yet  12:41
<tristan> (might clog up the conduit)  12:41
<dominic> ahh, never knew conduit had another meaning  12:42
<tlater> dominic: "Another"?  12:44
<tristan> I like it though; seems like there are opportunities for naming of accompanying tooling too, like "Plunger" or "Faucet"  12:45
<dominic> as in, I only knew of an electrical conduit  12:45
<tlater> Oh, right  12:45
<dominic> so was confused about the suggestion at first  12:45
<tristan> "Just extract your build results with Faucet...", or "Did you try freeing up some resources with Plunger?"  12:46
* tlater just renamed his cleanup job to plunger  12:46
<jmac> ISTR one of the common complaints about Baserock was the silly names for components  12:47
<paulsherwood> +1  12:47
<paulsherwood> there were others, of course :)  12:48
<aiden> a manifold is sort of a network of pipes  12:49
<toscalix> valentind: does the feature you are working on correspond with this feature request? https://gitlab.com/BuildStream/buildstream/issues/328  12:49
<valentind> No, #330.  12:50
<toscalix> thanks  12:50
<tristan> jmac, paulsherwood... that was a good idea for naming, just... the analogies were SO unpopular  12:50
<tristan> what is the field even called again? mineralogy or smth?  12:50
<tristan> some ology  12:50
<paulsherwood> tristan: morphology. and stratum. whoosh...  12:51
<toscalix> finn: BuildFarm - BuildGrid  12:51
<tristan> yeah, not that ology; morphology is a word in the field of... somethingology  12:51
<finn> BuildGrid is simple  12:52
<finn> like it  12:52
<jmac> +1 for BuildGrid  12:52
<paulsherwood> +1  12:52
<paulsherwood> (assuming it actually gives a sense of what it is)  12:52
<tlater> At that point we could just go for BuildFram  12:52
<tlater> *Farm  12:52
<tristan> I liked Conduit better, but not against BuildGrid  12:52
<tlater> I don't think Grid is very obvious  12:52
<tristan> I thought the reason not to be BuildFarm is... obviously that is Google branding for a similar thing  12:53
<tristan> it really makes no sense to be BuildFarm  12:53
<finn> no  12:53
<finn> I don't want to have to keep saying Bazel / Uber BuildFarm and BuildStream BuildFarm  12:54
<tristan> finn, I agree, and think we should have a distinctive name  12:54
<tristan> even if it's said to have compatible components which adhere to some BuildFarm standards  12:54
<tristan> it's better to distinguish  12:55
<tristan> Then again... is there even actually such a component?  12:55
<tristan> From what I understand, we are already recycling the ultra-boring name CAS  12:56
<tristan> because we were too lazy to come up with a name for it  12:56
<tristan> finn, is there even a "thing" to be named?  12:56
<finn> a project repo  12:56
<tristan> no executable or service?  12:57
<tristan> or library even?  12:57
*** jennis_ has quit IRC  12:58
*** jennis has quit IRC  12:58
<tlater> finn: Won't that project produce a daemon for a server to run? That was my impression of this, at least.  12:58
*** jennis has joined #buildstream  13:01
*** jennis_ has joined #buildstream  13:01
<tristan> in any case, you can have a grid of pipes, but you don't grow pipelines on your farm, so I think grid is already better than farm (if there is indeed a thing to name; if there is not a tangible thing to name, that's just boring :))  13:01
<finn> It will be a background service for a server to run, so I think that's a yes to your question - though I'm no Computer Scientist  13:02
* tristan goes to harvest a fried chicken at the fried chicken grid  13:03
<skullman> grid works analogously to the power grid, as something you connect to, to get work done  13:04
*** tristan has quit IRC  13:06
*** jennis_ has quit IRC  13:08
*** jennis has quit IRC  13:08
*** jennis has joined #buildstream  13:10
*** jennis_ has joined #buildstream  13:10
<tlater> Hm, I wonder if the various BuildElement variables are documented somewhere  13:11
<tlater> bindir and co  13:11
*** tiagogomes has joined #buildstream  13:12
*** tiago has quit IRC  13:13
*** jennis has quit IRC  13:14
*** jennis_ has quit IRC  13:14
*** solid_black has joined #buildstream  13:35
*** solid_black has quit IRC  13:39
*** tristan has joined #buildstream  13:40
*** tiagogomes has quit IRC  13:50
*** bethw has quit IRC  14:24
*** bethw has joined #buildstream  14:30
*** tiagogomes has joined #buildstream  14:48
<gitlab-br-bot> buildstream: merge request (jmac/virtual_directories->master: WIP: Abstract directory class and filesystem-backed implementation) #445 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/445  15:38
<gitlab-br-bot> buildstream: merge request (valentindavid/359_cross_junction_elements->master: WIP: Allow names for cross junction elements) #454 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/454  16:02
*** dominic has quit IRC  16:04
*** toscalix has quit IRC  16:08
*** bethw has quit IRC  16:20
*** jonathanmaw has quit IRC  17:03
*** bochecha_ has joined #buildstream  17:19
*** ernestask has quit IRC  18:05
*** tristan has quit IRC  19:28
<gitlab-br-bot> buildstream: merge request (valentindavid/359_cross_junction_elements->master: WIP: Allow names for cross junction elements) #454 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/454  20:00
*** aday has quit IRC  20:51
*** aday has joined #buildstream  20:52
*** aday has quit IRC  21:38
*** bochecha_ has quit IRC  22:39
*** bochecha_ has joined #buildstream  22:41
*** bochecha_ has quit IRC  23:11
<gitlab-br-bot> buildstream: merge request (chandan/393-fix-workspace-no-reference->master: WIP: element.py: Fix consistency of workspaced elements when ref is missing) #462 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/462  23:49

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!