IRC logs for #buildstream for Friday, 2018-04-27

00:15 *** Prince781 has joined #buildstream
01:22 *** Prince781 has quit IRC
02:58 *** Prince781 has joined #buildstream
04:13 *** Prince781 has quit IRC
04:19 *** tristan has joined #buildstream
06:21 <tristan> juergbi, since you are dancing in CAS land, I'd like you to please review my comment here: https://gitlab.com/BuildStream/buildstream/issues/76#note_70558371
06:23 <juergbi> tristan: yes, CAS works pretty much the same in this regard
06:23 <juergbi> tristan: also note that with FUSE we don't have to extract at all anymore, though
07:08 <tristan> juergbi, Right but... we still need to inspect metadata from the artifacts in various places before a build commences
07:09 <tristan> I think this is irrelevant though, or I hope so
07:09 <juergbi> tristan: yes, but that should similarly be possible with the virtual filesystem API
07:09 <tristan> If we have OSTree, we should just make extraction happen at cache insertion time
07:09 <juergbi> although we haven't planned any functionality to open a file for reading from there yet
07:09 <tristan> And if we have CAS, we should have a way to read the metadata from an artifact, no matter which way it is done
07:10 <juergbi> yes, just wanted to note it to avoid spending lots of time changing things with OSTree for minimal benefit and then throwing it away when switching to CAS
07:10 <tristan> Right, that is sort of the goal of my comment anyway
07:10 <juergbi> (it's probably pretty simple but still)
07:11 <tristan> even if we *did* stick with OSTree, it's still not desirable to try to force artifact caches to support partial extraction
07:11 <tristan> Yeah, extract at insert time should be very simple, and a performance gain on its own
07:13 <juergbi> tristan: I don't understand your comment about partial extraction
07:14 <juergbi> direct access of metadata in CAS instead of extraction makes sense to me
07:14 <juergbi> (it would be no extraction at all, not partial extraction, to be precise)
07:14 <juergbi> (here CAS is used as a generic term)
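To make "no extraction at all" concrete: in a content-addressed store a directory is just a listing of names and content digests, so an artifact's metadata can be read by following two digests, without ever staging the rest of the tree. A minimal sketch, assuming a toy JSON-based store (the ToyCAS layout is illustrative, not BuildStream's actual format):

import json
import os

class ToyCAS:
    # Toy content-addressed store: every blob lives at objects/<digest>,
    # and a "directory" is itself a blob: a JSON map of name -> digest.
    def __init__(self, root):
        self.root = root

    def read_blob(self, digest):
        with open(os.path.join(self.root, "objects", digest), "rb") as f:
            return f.read()

    def read_directory(self, digest):
        return json.loads(self.read_blob(digest).decode("utf-8"))

    def read_metadata(self, artifact_digest):
        # Follow only the entries we need: artifact root -> 'meta' blob.
        # Nothing else is touched, so there is no extraction step at all.
        rootdir = self.read_directory(artifact_digest)
        return json.loads(self.read_blob(rootdir["meta"]).decode("utf-8"))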
07:15 <tristan> juergbi, I'm looking at #75 separately from CAS; some of the preceding comments there indicate that we would want to get information about the artifact at an earlier stage, without completely extracting the artifact
07:15 <tristan> which is something avoidable I think
07:16 <tristan> There is an interesting problem still to be solved there in the case that you do run into a failed artifact during a build
07:17 <juergbi> ah right, I agree we should not go towards extract jobs or anything like that
07:17 <tristan> If you have a PullQueue enabled, you can already know that the artifact you downloaded was a failed one before getting to building its reverse dependencies
07:17 *** bochecha_ has joined #buildstream
07:18 <tristan> I'll comment again because I have an idea
07:28 <tristan> juergbi, new comment about that, but not related to CAS anymore :)
07:28 <tristan> Anyway... lots of issue comments today... issue rampage
07:29 <tristan> juergbi, I'm getting a bit more concerned about caching of build trees, in light of big builds like WebKit
07:29 <tristan> :-/
07:30 <tristan> juergbi, I think that *even if* we are able to exclude the `.git` directory for such a thing, we're *still* going to end up with ~6GB artifacts
07:31 <tristan> My built WebKit directory is 5.8GB, excluding the resulting WebKit output
07:31 <tristan> and this is from a tarball
07:31 <tristan> This is actually a really big deal
07:31 <gitlab-br-bot> buildstream: merge request (valentindavid/update_mirror->master: WIP: Valentindavid/update mirror) #440 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/440
07:31 <tristan> juergbi, do you think we should make this optional in some way?
07:32 <tristan> I'm a bit scared of this, it's a crap ton of data nobody is going to want to download
07:32 <tristan> Maybe we need a fragmented artifact cache for this?
07:32 <tristan> poor Nexus
07:33 <tristan> he keeps getting the curveballs
07:33 <tristan> Maybe I should write to the list on this topic
07:36 *** toscalix has joined #buildstream
07:38 *** toscalix has quit IRC
07:39 *** toscalix has joined #buildstream
07:43 <gitlab-br-bot> buildstream: merge request (valentindavid/update_mirror->master: WIP: Valentindavid/update mirror) #440 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/440
07:51 *** tristan has quit IRC
08:37 *** tristan has joined #buildstream
08:37 <juergbi> tristan: I was voicing that concern as well and think it should definitely be optional. That said, deduplication should help a lot here with regard to multiple builds, as long as object files are also reproducible
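Why content addressing deduplicates across builds: the storage key is the hash of the content, so a bit-reproducible object file from a second build hashes to the same key and occupies no additional space. A minimal sketch, assuming a simple objects/<digest> layout (not the actual OSTree or CAS on-disk format):

import hashlib
import os
import shutil

def store_blob(store_root, path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    digest = h.hexdigest()
    dest = os.path.join(store_root, "objects", digest[:2], digest[2:])
    if not os.path.exists(dest):
        # First time we see this content; otherwise it is deduplicated.
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copy2(path, dest)
    return digest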
08:38 <tristan> juergbi, yeah, but it should be either optional or on demand, perhaps
08:39 <juergbi> not sure how we could do on demand
08:41 <tristan> juergbi, an artifact-cache split is an idea
08:41 <tristan> not super pretty, but totally plausible
08:42 <tristan> there are only a few cases where we actually *want* the cached build tree, so why not just download it when we want it?
08:42 <tristan> If we split the artifact cache in two, locally and remotely, we can have that
08:43 <juergbi> ah, not making the caching part optional, just the downloading
08:43 <juergbi> with CAS it should be possible to do this even without splitting
08:44 <juergbi> i.e., download parts of an artifact
08:44 <juergbi> we want this for remote execution anyway, where we often don't even need the build output
08:44 *** dominic has joined #buildstream
08:45 <juergbi> a complete split would prevent effective deduplication (especially relevant when sources are stored in the cache as well)
08:45 <tristan> actually, even with ostree we could potentially do that
08:45 <tristan> i.e. we could have 2 keys to address it
08:45 <juergbi> yes, at least on the low level it's definitely possible
08:46 <tristan> ${project}/${element}/${key} and build/${project}/${element}/${key}
08:46 <tristan> just separate commits
08:46 <juergbi> yes
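A sketch of that two-ref scheme, using the ref names quoted above (repo.pull() here is a placeholder, not a real API): the build tree hangs off its own ref, so a plain `bst build` never fetches it, while something like `bst workspace open` can request it explicitly.

def artifact_refs(project, element, key):
    main_ref = "{}/{}/{}".format(project, element, key)
    build_ref = "build/{}/{}/{}".format(project, element, key)
    return main_ref, build_ref

def pull_artifact(repo, project, element, key, want_build_tree=False):
    main_ref, build_ref = artifact_refs(project, element, key)
    repo.pull(main_ref)        # output, logs, metadata: always needed
    if want_build_tree:        # on demand only, e.g. for a workspace
        repo.pull(build_ref)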
08:50 *** jonathanmaw has joined #buildstream
08:56 <tristan> juergbi, if we do this, do you think we should split logs and metadata as well, for consistency?
08:58 <juergbi> not sure. in CAS I would handle them all the same way, but that would be via subdirectories; there would be no need for the separate refs
08:58 *** bethw has joined #buildstream
08:58 <Nexus> tristan: I saw "poor Nexus", what's happening now‽
08:58 <juergbi> I don't like the idea of moving metadata out of the object, but for consistency it could make sense
08:58 <tlater> Hm, now that our fedora image isn't used for testing anymore, but just `bst-here`, should we consider only installing stable buildstream releases on it?
08:59 <juergbi> maybe only from 1.2 onward?
08:59 <tlater> We could also add an :unstable tag, since bst-here is growing "change the image we'll use" functionality
09:01 <tristan> juergbi, for consistency, and also having the ability to access metadata might allow us to do more interesting client-side things with less overhead
09:02 <tlater> Oh, apparently we haven't split off the testing image from the bst-here image
09:03 * tlater thought that was what jjardon did a little while ago
09:07 <gitlab-br-bot> buildstream: merge request (jennis/136-clean-remote-cache->master: WIP: jennis/136 clean remote cache) #421 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/421
09:20 <tristan> Nexus, just sent mail to the list :)
09:20 <Nexus> yup, reading now
09:21 <tristan> Nexus, things are not going to work exactly as planned; it's hopefully not too huge of an obstacle, but I wanted to make a more public statement. Hopefully this will help reduce stress and pressure directed at you :)
09:21 <Nexus> much appreciated :)
09:40 *** noisecell has quit IRC
09:47 <gitlab-br-bot> buildstream: issue #316 ("Assertion error when building in non-strict mode") changed state ("opened") https://gitlab.com/BuildStream/buildstream/issues/316
09:52 *** aday has joined #buildstream
09:54 <jennis> tristan, FYI remote cache expiry is *mostly* ready. All I am doing now is trying to work around the unlikely case of an artifact being bigger than the actual cache quota itself (which is turning out to be a real PITA)
10:06 *** noisecell has joined #buildstream
10:20 * tlater considers writing to the ML about the structure of buildstream-docker-images
10:20 <tlater> There's just about no structure to how we manage those images anymore
10:21 <tlater> And I think splitting up buildstream-fedora into a testing and a runtime image is pretty important
10:39 <tristan> jennis, I guess you can abort bst-artifact-receive and hit the cleanup path in that case?
10:39 <tristan> jennis, I mean, while it's in progress; as you may not be able to know until you fill up the quota
10:40 <tristan> jennis, then again, you don't know if there are deduplicated similar ones
10:40 <tristan> jennis, OK well... *please* be forgiving and don't spend too much effort perfecting that case
10:41 <tristan> jennis, it's acceptable to let it grow beyond the quota and think about it after
10:41 <tlater> tristan: The problem is mostly that when you do try to close the pipe on the server end, the client complains about a broken pipe
10:41 <tristan> jennis, this is A) an edge case that is not worth the hassle of perfecting anyway, and B) some hopefully short-lived code which will be replaced by CAS
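That "let it grow, think about it after" policy as a sketch; cache.commit(), cache.artifacts_by_mtime() and friends are hypothetical placeholders for whatever the receive side exposes:

def receive_artifact(cache, artifact, quota):
    # Accept the upload unconditionally, even if that temporarily
    # pushes the cache past its quota (e.g. a single artifact bigger
    # than the quota itself).
    cache.commit(artifact)

    # Then expire least-recently-used artifacts until we are back
    # under quota -- but never the artifact we just received.
    for old in cache.artifacts_by_mtime():
        if cache.total_size() <= quota:
            break
        if old is not artifact:
            cache.remove(old)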
10:42 <tristan> Huh, I was sure we were handling hangups correctly
10:44 * tlater was a bit confused too, perhaps someone else can shed some light there
10:44 <tlater> I suspect we send some other data *after* the tar stream completes
10:44 <tlater> And that the client doesn't realize that the pipe closed in that case
10:45 <jmac> I didn't think we did send any more data after the tar stream...
10:45 <tristan> OK so, maybe we should just define what "complain" is here, and handle that as an error?
10:45 <tlater> jennis tried that, but it keeps complaining in various spots, so he'd have to litter everything with try/except clauses
10:45 <tristan> :-/
10:46 <tristan> tlater, sounds like perhaps the try/except clauses were missing in the first place?
10:46 <tlater> Yeah, agreed, there's something fundamentally wrong here
10:46 <jmac> Aha, no, we do wait for a PushCommandType.done
10:46 <tristan> I mean, remember that this code comes from a proof-of-concept toy for an upstream proposal to ostree
10:47 <tlater> jmac: Presumably that is something we define as a little protocol?
10:47 <tristan> we made it a bit more robust, but it ain't really all that fault tolerant
10:47 <tlater> Yeah, fair enough, it's about to be obsoleted anyway
10:47 <tlater> But it *does* make the edge case at hand painful to deal with
10:48 <jennis> I've implemented it so that if the artifact is larger than the quota, it won't start deleting already-present artifacts
10:49 <jennis> So tristan, I guess what you suggested about letting it grow beyond the quota and dealing with it after could be a possibility
10:49 <jmac> tlater: Yes, it's defined just by the implementation in pushreceive.py; there's no explicit definition
10:50 <jmac> But AFAIK it's just a 5-byte message with one byte set to PushCommandType.done
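For flavour, a hypothetical framing in the same spirit -- the field layout and byte counts here are invented; the real one lives only in pushreceive.py, as jmac says. Every message carries a byte-order marker, a command byte and a payload length, and "done" is simply a header with an empty payload.

import struct
from enum import IntEnum

class PushCommandType(IntEnum):
    info = 0
    update = 1
    putobject = 2
    done = 3

def encode_message(command, payload=b"", little_endian=True):
    # <byte-order marker><command byte><u16 payload length><payload>
    order = "<" if little_endian else ">"
    marker = b"l" if little_endian else b"B"
    return marker + struct.pack(order + "BH", command, len(payload)) + payload

done_msg = encode_message(PushCommandType.done)   # header-only message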
10:50 <jennis> however, if the quota is >= 50% of the server's available disk space, we'll be adding an artifact greater than this...
10:51 <tlater> jmac: That's what I figured from my fairly limited exploration... I also assume that this means the client expects the tarfile to upload without the pipe ever closing, which causes the various BrokenPipeErrors
10:51 <tristan> Do we launch it in a subprocess?
10:51 <jmac> tlater: Yeah, there's no feedback - the server can't tell the client to stop
10:51 <tristan> I don't think we do anymore
10:51 <jennis> tristan, yes
10:51 <tristan> we do?
10:52 <tristan> So... we're just chasing after some error messages which are produced by a failed push... in a subprocess?
10:52 <tristan> Like, do we really care how gracefully the subprocess exits?
10:52 <jennis> https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/_artifactcache/pushreceive.py#L441
10:53 <tlater> Yep, we can probably just ignore those exceptions there
10:54 <tristan> ohhh, you mean the ssh command
10:54 <tlater> Ah, nevermind
10:55 <tlater> I suppose we could ignore the exceptions in pushqueue?
10:55 <tristan> well, how painful can it be to handle the exceptions and abort the push?
10:55 * tristan is a bit stumped; it's painful, it complains...
10:56 <gitlab-br-bot> buildstream: merge request (jennis/136-clean-remote-cache->master: WIP: jennis/136 clean remote cache) #421 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/421
10:56 <tristan> This *is* all happening in a child task, though
10:56 <tristan> I mean, handle it all and raise ArtifactError() for the push, and we'll get a nice clean error message in the frontend
10:58 <tlater> jennis: Have you tried catching the exception in ostreecache.py instead of pushreceive.py?
10:58 * tlater suspects that will give a nice, single spot to deal with
10:59 <tlater> _push_to_remote() looks to be the place
11:01 <tristan> Right, *try* to be careful about generalizing too much on the exception; if it's specifically a broken pipe / hangup exception that we catch there, then that is nice
11:01 <tristan> (because it's nice to keep unexpected exceptions reported as BUG with a stacktrace)
11:02 <tristan> But don't try to distinguish between a general network failure broken pipe and the remote cache having refused the artifact
11:02 <tristan> That is something we should have with CAS eventually, but it's not worthwhile here
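The shape being suggested, as a sketch (_do_push() and the function name are invented; ArtifactError and PushException are the existing exceptions discussed here): catch only the errors that mean "the push failed" and convert them, and let anything unexpected propagate so it is still reported as a BUG with a stacktrace.

def push_artifact(element, refs):
    try:
        _do_push(refs)    # placeholder for the actual push logic
    except (BrokenPipeError, ConnectionResetError, PushException) as e:
        # Hangup or protocol error: the push failed, and we deliberately
        # don't distinguish why the remote went away.
        raise ArtifactError("Failed to push {}: {}".format(element, e)) from e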
11:03 <jennis> tristan, no, I will see where I can get with that
11:06 <tristan> abderrahim[m], thanks for the report! ... looks like more whack-a-mole to do in Element._update_state()
11:08 <abderrahim[m]> 😄
11:11 <tlater> tristan: Hrm, I don't like how the cleanup jobs fit inside this queue thing we cooked up - yes, it's just another job for a queue to launch, but it also needs special casing because we want to run it exclusively.
11:12 <tlater> That is, when a cleanup job is launched we need to 1) wait for every current job to finish, 2) keep the scheduler from launching new jobs, 3) launch the cleanup job
11:12 <tlater> This is so we avoid changing the cache contents while we're busy cleaning up
11:13 <tlater> Actually, perhaps we could only stop launching build/pull jobs
11:13 <tlater> But anything that could add objects to the cache must not run, because the cleanup job could accidentally remove the committed objects before they are assigned a ref
11:14 <tlater> In either case, that means that the scheduler has to change what it's doing because we're running a cleanup job
11:14 <tlater> So it can't just be another job we submit
11:15 <tlater> Alternatively we can make sure the artifactcache is locked, but we'd need to do so across threads, which I think would require a lockfile?
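A minimal sketch of that gating, using threads where BuildStream's scheduler really uses child processes (class and method names invented for illustration):

import threading

class SchedulerGate:
    def __init__(self):
        self._cond = threading.Condition()
        self._running = 0
        self._exclusive = False

    def start_job(self):
        with self._cond:
            while self._exclusive:      # (2) no new jobs during cleanup
                self._cond.wait()
            self._running += 1

    def job_done(self):
        with self._cond:
            self._running -= 1
            self._cond.notify_all()

    def run_exclusive(self, cleanup):
        with self._cond:
            self._exclusive = True
            while self._running:        # (1) drain every current job
                self._cond.wait()
        try:
            cleanup()                   # (3) run the cleanup job alone
        finally:
            with self._cond:
                self._exclusive = False
                self._cond.notify_all()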
11:19 <tristan> tlater, don't forget about the 3-value threshold
11:19 <tristan> which we discussed
11:20 * tlater does recall that, but isn't sure how it solves the problem of cache cleanup needing to be atomic.
11:20 <jennis> so basically the problem is with the PushCommandType, as jmac noticed earlier
11:20 <jennis> http://termbin.com/iyvo9
11:20 <tristan> tlater, we launch the cleanup at the median, with a desire to bring it down to the first value; we only stop queueing more jobs when we reach the limit
11:20 <jennis> Seems to be returning a mu for byte-order
11:21 <tristan> tlater, it must not be atomic; it must be launched with a decision of exactly what it's going to do, which artifacts it's going to clean up... its execution is finite, right?
11:21 <tlater> Hrm, yeah, that works, but it still means that the cleanup job is special
11:21 <tristan> of course it's special
11:21 <tristan> It is *totally* special and nothing like the event notifications
11:21 <tristan> it needs business logic to caress it
11:22 <tristan> tlater, and it should not delete things which are deduplicated parts of ongoing concurrent caching of other things :)
11:22 <tristan> there is a big gotcha :)
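That three-value threshold as a tiny policy function (names illustrative): cleanup starts once usage crosses the median and aims to bring usage back down to the low mark; only at the hard limit does the scheduler stop queueing jobs.

def cache_policy(cache_size, low, median, limit):
    return {
        "start_cleanup": cache_size >= median,
        "cleanup_target": low,
        "stop_queueing": cache_size >= limit,
    }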
11:23 <tlater> tristan: Could you explain that in a bit more detail? I feel I'm missing another thing buildstream does here.
11:23 <tristan> tlater, it caches artifacts
11:23 <tristan> while you are deleting them
11:23 <tristan> otherwise, it has to be atomic
11:23 <tristan> which sucks
11:24 <tlater> Yep, and that's a big problem
11:24 <tlater> Because ostree apparently only assigns a ref after the objects have been committed
11:24 <tristan> tlater, maybe we can solve that in CAS artifact caches only
11:24 * tlater is afraid the cache deletion has to be atomic until we get CAS
11:24 <tristan> tlater, if the atomic thing happens not very often (large thresholds), then it won't be too annoying for now
11:26 <tlater> I'll concede cleanup jobs their snowflake status then.
11:26 <tlater> jennis: That mu may very well just be a glitch
11:26 <tlater> What's important is that the client should handle erroneous data gracefully, and not crash
11:27 <tlater> Note that this might already happen due to network flukes
11:27 <tlater> Of course, we can't know whether a network fluke or a too-large artifact caused this
11:27 <tlater> But that distinction doesn't matter enough that we should block a stopgap solution for cache expiry on it
11:27 <tlater> So if you just handle that error by warning the user that the push failed, it's fine
11:28 <Nexus> tristan: In your email, what do you mean by "Downloading the build tree of an artifact must only ever be done on demand"?
11:29 <tlater> jennis: Though... why is it trying to figure out a byte order there?
11:29 <tlater> Does that exception happen during initialization? The byte order should be known by the time we have started uploading an artifact, no?
11:31 <tlater> Ah, right, every message has a byte order attached... even the done message, and if the pipe doesn't contain data, that byte order is mu, for some reason.
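A defensive version of that header read might look like this, assuming GVariant-style 'l'/'B' endianness markers (the exact characters pushreceive.py uses may differ): an empty read means the peer hung up and should become a clean PushException, rather than whatever falls out of interpreting a garbage byte.

def read_byteorder(pipe):
    marker = pipe.read(1)
    if marker == b"":
        # Pipe closed: there is no data at all, not a corrupt message.
        raise PushException("Connection closed by remote")
    if marker not in (b"l", b"B"):
        # Anything else (such as the mysterious mu) is a protocol error.
        raise PushException("Invalid byte-order marker {!r}".format(marker))
    return "<" if marker == b"l" else ">"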
11:32 <tristan> Nexus, I mean on demand, in the few corner cases that we need it, not in normal operation
11:32 <tristan> Nexus, I think I went over that in the email too
11:33 <Nexus> I'm a bit confused as to what you mean by "downloading the build tree"
11:33 <tristan> Nexus, for instance, I should have a choice when opening a workspace... do I wanna wait and download 6GB of build tree so that I can have an incremental build of WebKit?
11:33 <Nexus> is this assuming a remote cache?
11:33 <tristan> Yes
11:33 <Nexus> ok
11:33 <tristan> If it's cached locally, that's a different story indeed
11:33 <Nexus> currently there is the option when opening a workspace to add the flag "--no-cache"
11:33 <Nexus> would you prefer that be the default?
11:34 <tristan> This is bigger than just that
11:34 <tristan> Currently you will cache the build trees and upload them unconditionally; that part is correct
11:34 <Nexus> Conditional on your not telling buildstream not to, yes
11:35 <tristan> Currently people want to `bst build epiphany.bst`; they don't need cached build trees of everything to do it, but we're forcing the downloads on users, and that's really bad.
11:35 <Nexus> then you could set the variable `cache-build-tree` to false in your project.conf
11:35 <tristan> tlater, I'm gonna go now... but don't think of doing crazy things like auto-suspending ongoing tasks; wait for tasks to complete
11:36 <gitlab-br-bot> buildstream: merge request (378-make-bst-here-more-flexible->master: Resolve "Make `bst-here` more flexible") #439 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/439
11:36 <tlater> tristan: Don't worry, they'll gracefully end ;p
11:36 <tristan> Nexus, that won't stop you from downloading the entire cached artifact from the remote, where it resides
11:36 <Nexus> is that a thing that currently exists separate to my code?
11:36 <tristan> Nexus, I want to `bst build epiphany.bst`; I don't wanna download 20GB of stuff I don't need
11:36 <jennis> So I guess to be safe, I should return a flag if we've encountered a too-large artifact and then force the byte order to be something reasonable
11:37 <Nexus> because my code never uses the cache for a build
11:37 <tristan> Nexus, your code caches a build tree in the artifact, yes?
11:37 <tlater> Nexus: `cache-build-tree` only stops you from *uploading* cached build trees, but if someone has already uploaded one, you'll have to download a massive artifact.
11:37 <tristan> that's kind of the plan
11:37 <Nexus> tlater: yes, during opening a workspace only
11:38 <tristan> Nah, a remote artifact cache should always be complete; you don't get something that depends on the build configuration of the uploader
11:38 <tristan> that'll never happen
11:38 <tristan> Nexus, if the uploader uploaded everything (which it should)... the downloader currently *has* to download everything (which is horrible)
11:42 <gitlab-br-bot> buildstream: merge request (valentindavid/update_mirror->master: WIP: Valentindavid/update mirror) #440 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/440
11:42 <Nexus> tristan: kk, I just chatted with tlater. Just to make sure I have this correct: currently, when you build something, buildstream looks to see if it has that artifact. Once my code pushes up a build tree, that process will ALSO pull down a potentially huge build tree
11:42 <Nexus> correct?
11:43 *** tristan has quit IRC
11:43 <gitlab-br-bot> buildstream: merge request (valentindavid/etag->master: Store etag along with cache.) #441 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/441
11:44 <jennis> oh, so it looks as if it just returns any random char for the byteorder if the pipe contains no data
11:44 <gitlab-br-bot> buildstream: merge request (valentindavid/update_mirror->master: WIP: Valentindavid/update mirror) #440 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/440
11:45 <jennis> Do we think forcing the byteorder is the right thing to do here?
11:45 <jennis> Seems like it could potentially disguise other problems
11:45 <tlater> jennis: That code is down to networking details
11:45 <tlater> If the byteorder fails to parse, we should handle the error gracefully (i.e., abort pushing, but give the user a warning)
11:46 <tlater> I don't think you want to force the byte order
11:48 <jennis> So you're saying, instead of raising the PushException, abort pushing...?
11:48 <jennis> https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/_artifactcache/pushreceive.py#L82
11:49 <tlater> What I'm saying is, keep that code as-is, but wherever we call it, we need to handle a PushException and warn the user instead of crashing
11:49 <tlater> Because you may receive an invalid byte order - that's fine
11:49 <tlater> What's not fine is crashing when we do
11:50 <jmac> Is the byte order actually used for anything other than the size field?
11:50 <tlater> jmac: I'm pretty sure this is to avoid problems with little/big endian
11:50 <tlater> Because that's not guaranteed when sending something over the network, iirc
11:50 <tlater> But juergbi should know more about that
11:50 <jmac> That would generally be the use for a byte order field
11:51 <tlater> Oh, you mean we don't actually act upon it?
11:54 <jmac> I'm not sure
11:54 <tlater> jennis: One thing to consider is whether randomly receiving a correct byte order would break something
11:54 <jmac> The GLib.Variant thing is obscuring things a bit
11:58 <jmac> Yeah, it looks like we have the capability to byteswap arguments, but all the arguments we use are strings, bytes, or an integer that's always 0
11:59 <jmac> No big deal anyway
12:02 *** tristan has joined #buildstream
12:05 * jennis is stumped
12:06 <jennis> I'll see if I get any ideas after lunch :p
12:06 <Nexus> tristan: So in what I have written, the build tree is currently being saved into a new subdir of the artifact. Could we make a minor change to buildstream's current system, to make it exclude the build tree dir I added?
12:37 <jjardon> tlater: what is the bst-here image?
12:37 <tlater> jjardon: It's the image we'd actually like to be used exclusively for this script: https://gitlab.com/BuildStream/buildstream/blob/master/contrib/bst-here
12:37 <tlater> I.e., buildstream-fedora
12:38 <tlater> buildstream-fedora should be split into testsuite/fedora-27 and buildstream
12:38 <tlater> The one being used exclusively for tests, and the other exclusively for that script
12:39 * tlater is considering writing to the ML on this topic, because I'm not sure how aware people are of what the images repo should look like... or what various discussions here have thought it should look like.
12:56 <cs_shadow> tlater: I was also thinking of updating the README of http://gitlab.com/buildstream/buildstream-docker-images/ to document what the different images are and when they should be used
12:56 <cs_shadow> if you wish to start a ML thread, that might be even better to gather various opinions about this
12:57 <jjardon> cs_shadow: +1
12:58 <tlater> Yep, that's a great idea
12:58 <tlater> I'll send an email later today :)
13:32 * jennis is unsure how to deal with these exceptions
13:33 <jennis> I've jumped down a rabbit hole handling them, until I've hit a general exception, and blindly handling that will fail the build and provide no useful debugging
13:36 <jennis> I'm also hesitant to add too many try/except clauses
13:36 <jennis> ^ for such an unlikely use-case
14:04 <gitlab-br-bot> buildstream: merge request (jennis/136-clean-remote-cache->master: WIP: jennis/136 clean remote cache) #421 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/421
14:09 *** Prince781 has joined #buildstream
14:14 <tlater> jennis: You should be able to handle all or almost all of those exceptions in a single try/except clause
14:15 <jennis> yes, I've just done some rework; still had to use a bare except though
14:15 <jennis> Which has resulted in the log being pretty useless
14:15 <tlater> Erk
14:16 <tlater> Are you sure you need a bare except? I really doubt that
14:17 <jennis> https://gitlab.com/BuildStream/buildstream/merge_requests/421/diffs#1a607748038846ed9a81a1ae319de4df329ce8e6_649_701
14:17 *** Prince781 has quit IRC
14:18 <tlater> jennis: Are you aware that you can catch multiple exceptions?
14:19 <tlater> Yes you are...
14:19 <tlater> Why are you catching a bare except there?
14:20 <jennis> tlater, yes, but it's because the log contained a "raise Exception"
14:21 <tlater> You should be able to change that :)
14:21 <jennis> oh nvm, I haven't managed to obtain that again
14:22 <jennis> I had a return statement I forgot to remove earlier
14:22 *** Prince781 has joined #buildstream
14:22 <jennis> but now back to a log output with no debugging help
14:42 <gitlab-br-bot> buildstream: merge request (jjardon/debian-9->master: .gitlab-ci.yml: Run test in current Debian stable (stretch)) #425 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/425
14:42 <gitlab-br-bot> buildstream: issue #371 ("Run tests in Debian stable") changed state ("closed") https://gitlab.com/BuildStream/buildstream/issues/371
15:19 <gitlab-br-bot> buildstream: merge request (jennis/136-clean-remote-cache->master: WIP: jennis/136 clean remote cache) #421 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/421
15:26 *** noisecell has quit IRC
15:44 <jmac> I'm looking at _set_deterministic_user in utils.py. It uses the bst user's euid/egid, which doesn't seem very deterministic to me.
15:54 <jmac> I'm wondering if it has some other intended function.
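For contrast, a sketch of what a genuinely deterministic variant could do: pin ownership to fixed ids rather than the invoking user's euid/egid (illustrative only, not the actual utils.py code).

import os

def set_deterministic_user(directory, uid=0, gid=0):
    for root, dirs, files in os.walk(directory):
        for name in dirs + files:
            # lchown so the symlink itself is updated, not its target
            os.lchown(os.path.join(root, name), uid, gid)
    os.lchown(directory, uid, gid)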
16:19 *** dominic has quit IRC
16:23 <tlater> Does anyone happen to know how glib variants work in a bit more detail?
16:23 <tlater> Specifically, ostree commit variants: https://lazka.github.io/pgi-docs/OSTree-1.0/classes/Repo.html#OSTree.Repo.load_commit
16:23 * tlater is very confused as to what the object he gets returned contains...
16:42 *** bethw has quit IRC
16:43 <juergbi> tlater: a (serialized) GVariant object is an immutable byte array that conforms to the GVariant format and individual parts of it can be retrieved using g_variant_get and co.
16:43 <juergbi> see https://developer.gnome.org/glib/stable/glib-GVariant.html for the full documentation
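A quick PyGObject illustration of that retrieval: a GLib.Variant is typed and immutable, and children are fetched individually rather than deserialized wholesale. An ostree commit is one such variant, of type "(a{sv}aya(say)sstayay)", where e.g. child 3 is the subject string and child 5 the timestamp.

from gi.repository import GLib

v = GLib.Variant("(sa{sv})", ("hello", {"answer": GLib.Variant("i", 42)}))
print(v.get_type_string())                 # "(sa{sv})"
print(v.get_child_value(0).get_string())   # "hello"
print(v.unpack())                          # ('hello', {'answer': 42})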
16:46 *** xjuan has joined #buildstream
16:50 *** toscalix has quit IRC
16:51 <gitlab-br-bot> buildstream: merge request (jennis/136-clean-remote-cache->master: WIP: jennis/136 clean remote cache) #421 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/421
16:52 <jennis> I've handled the BrokenPipeError exception by simply forcing "pushed" to True; however, this is now breaking the other expiry tests, as the last element is not being pushed to the cache
16:53 <jennis> Which would only happen if the size of the tar_info object is greater than the quota
16:53 <jennis> So when we individually push three 5 MB artifacts, by the time it comes to receiving the third one, is the tar_info object 15 MB?
16:53 <tlater> ty juergbi
16:55 <tlater> jennis: What you might be looking at is slight inaccuracies
16:55 <tlater> Depending on how compressible your data is, you might get wildly varying actual artifact sizes in the cache
16:56 <jennis> Where wildly varying is on a scale of MBs?
16:56 <tlater> Yeah, could be, they can shrink quite a lot
16:56 <tlater> I don't think the tar_info object size will change...
16:56 <tlater> Can you inspect that?
16:57 <jennis> No, but will it represent the size of all compressed objects in the cache?
16:57 <tlater> Not afaik; I don't think all artifacts are pushed simultaneously, let alone in the same tar stream
16:57 <jennis> This is so bizarre
16:59 <tlater> jennis: A well-placed exception might help debug things ;)
17:02 <gitlab-br-bot> buildstream: merge request (214-filter-workspacing->master: Make workspace commands on a filter element transparently pass through to their build-dependency) #317 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/317
17:04 *** jennis has quit IRC
17:09 *** jonathanmaw has quit IRC
17:49 *** Prince781 has quit IRC
20:04 *** xjuan has quit IRC
20:17 *** mattis has joined #buildstream
20:17 <mattis> hi
20:18 <mattis> No OP
20:18 *** mattis has left #buildstream
20:18 *** mattis has joined #buildstream
20:22 *** mattis is now known as NotBanForYou
20:26 *** NotBanForYou has left #buildstream
20:29 *** Prince781 has joined #buildstream
20:47 *** xjuan has joined #buildstream
20:56 *** xjuan has quit IRC
21:16 *** cs_shadow has quit IRC
21:50 *** aday has quit IRC
21:54 *** bochecha_ has quit IRC
