IRC logs for #buildstream for Friday, 2017-07-14

01:50 *** xjuan has quit IRC
07:51 *** jonathanmaw has joined #buildstream
08:03 *** jonathanmaw has quit IRC
08:03 *** jonathanmaw has joined #buildstream
08:04 *** tristan has joined #buildstream
08:05 *** ChanServ sets mode: +o tristan
08:14 *** tlater has joined #buildstream
08:27 <gitlab-br-bot> push on buildstream@dual-cache-keys (by Jürg Billeter): 20 commits (last: context.py: Add strict_build_plan option) https://gitlab.com/BuildStream/buildstream/commit/7f4e2618fc091b44a43c6c5ac6f8fd54f8518fbc
08:27 <gitlab-br-bot> buildstream: merge request (dual-cache-keys->master: Dual cache keys) #55 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/55
08:30 <juergbi> i had to fight against cyclic python imports, solved it by moving ArtifactError to exceptions.py. i assume that's ok
08:33 <tristan> juergbi, I had a similar problem, I moved it there and back once
08:34 <tristan> juergbi, to be honest I'm hoping to do an "API sweep" before any stable release and remove as much as possible
08:34 <tristan> from public surfaces
08:34 <tristan> what I found in the ArtifactError case was that it was importing Element
08:35 <tristan> which was unneeded, really
08:35 <tristan> juergbi, anyway it's not a huge deal for right now
08:37 <tristan> maybe all these exceptions should be public eventually anyway; I'm thinking that in an eventual future I want to move all of the high-level ops stuff out of _pipeline.py and put that somewhere else; maybe even in context.py or such
08:38 <tristan> The scheduler should be a detail of context.py
08:38 <tristan> And the pipeline should really just be the loading of a fully resolved project
08:38 <tristan> (this is thinking towards recursive pipelines, where I want to be able to "add a pipeline to a scheduler")
08:39 <tristan> and the Pipeline I would like to be public, to allow external projects to load and iterate over elements
08:39 *** dabukalam has left #buildstream
08:40 <tristan> but anyway, that's just my pipedream of pipelines; for the immediate future, just privatize as much as possible, and the location of ArtifactError is not hugely important
08:50 <juergbi> yes, it's still private. i was a bit surprised how painful python can be in this regard
08:51 <tristan> yeah, in 3.5 circular imports are supported
08:51 <tristan> which caused me to mess up before and introduce them
08:51 <tristan> but it's all around saner to avoid them
08:52 <tristan> circular imports tend to turn code into spaghetti :)
08:54 <juergbi> yes, i like a clear module structure, but with internal functionality split across a couple of modules, you sometimes want python --no-strict ;)
08:55 <juergbi> i'm actually on python 3.6, doesn't change anything in this regard
08:55 <tristan> oh strange, I have that somehow disabled by default with python 3.5
08:55 <tristan> maybe they became stricter
08:56 <tristan> anyway we support 3.4, and if we have fuzz of internal functionality split across modules, maybe we should fix that anyway
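A minimal sketch of the fix juergbi describes above. The module names here are illustrative stand-ins, not BuildStream's real layout: the point is that moving the shared exception into a leaf module (one that imports nothing else from the package) breaks the import cycle between the two modules that both need it.

```python
import os
import sys
import tempfile

# Illustrative reproduction (hypothetical module names): ArtifactError lives
# in a leaf module, so element.py and artifactcache.py no longer need to
# import each other just to share the exception class.
pkg = tempfile.mkdtemp()
files = {
    "exceptions.py": "class ArtifactError(Exception):\n    pass\n",
    # Both modules depend only on the leaf module -- no cycle.
    "element.py": "from exceptions import ArtifactError\n",
    "artifactcache.py": "from exceptions import ArtifactError\nimport element\n",
}
for name, body in files.items():
    with open(os.path.join(pkg, name), "w") as f:
        f.write(body)

sys.path.insert(0, pkg)
import artifactcache  # imports cleanly; a cycle here could raise ImportError

print(artifactcache.ArtifactError.__name__)
```

With the old layout (the exception defined in a module that also imports `element`), whether the cycle resolves depends on import order, which is exactly the fragility being discussed.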
08:56 <tristan> I wonder if it was a mistake to let the artifact cache work out what refs to commit based on the element object in the first place
08:56 <tristan> but meh
08:57 <juergbi> yes, i was actually thinking about the element/artifactcache boundary as well recently
08:57 <tristan> it's not hugely important, but I guess if it were a bit more explicit, it *might* (only maybe) make things more straightforward for implementing a second artifact cache
08:57 <juergbi> but it's not public api, we can change things later as we see fit
08:57 <tristan> indeed
09:11 *** ssam2 has joined #buildstream
09:16 * tristan has a variant conditional source
09:17 <tristan> I need the 0.36 branch of vala for GNOME SDK
09:17 <tristan> And running `bst track --variant flatpak core-deps/vala.bst` works like a charm :)
09:35 <tristan> jonathanmaw, so... as far as I know SIGTSTP is not handled whilst shelling into a failed build environment, we don't have handlers in place for that
09:36 <tristan> jonathanmaw, but one important thing to note: this started happening when we changed the sandbox spawning for interactive mode to not create a new session for the bwrap process
09:36 <tristan> So, if a child process is in the same session, it generally means you will receive the unix signals it receives too
09:37 <tristan> However, changing it so that we run the interactive sandbox directly in the same session caused the terminal to be functional
09:38 <jonathanmaw> ah, ok
09:38 <tristan> The bug that happens with a non-functional terminal is, well, a shitty terminal first of all (no tab completion and such), and also ^C causes the shell to abort
09:38 <tristan> So, yeah, it's possible that a SIGSTOP is being issued from bash itself
09:38 <tristan> at exit time?
09:38 <tristan> Weird, but possible?
09:39 <tristan> If that's the case, then we may not have a workaround for this, and might be stuck implementing a namespaced functional terminal "properly"
09:39 <tristan> which is... a hassle
09:39 <tristan> it means we need another process marshalling data to and from the namespaced child process
09:40 <tristan> that's more or less what I gathered from the kernel experts on OFTC irc, but I didn't take the time to understand it in detail
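The session behaviour being discussed can be sketched with `subprocess` (using `/bin/sleep` as a stand-in for the bwrap child, since the real sandbox invocation is out of scope here). `start_new_session=True` gives the child its own session, so it no longer shares terminal-generated signals with the parent; leaving it off keeps the child in the parent's session, which is the interactive-shell case described above.

```python
import os
import subprocess

# Sketch of new-session vs. same-session spawning (child is a stand-in).
detached = subprocess.Popen(["sleep", "30"], start_new_session=True)
shared = subprocess.Popen(["sleep", "30"])  # default: stays in our session

my_sid = os.getsid(0)                 # our own session id
detached_sid = os.getsid(detached.pid)
shared_sid = os.getsid(shared.pid)

print(detached_sid != my_sid)  # True: detached child got its own session
print(shared_sid == my_sid)    # True: shared child is in our session

for p in (detached, shared):
    p.terminate()
    p.wait()
```

Signals like SIGTSTP generated by the controlling terminal are delivered to the foreground process group of the session, which is why the choice of session here changes what the parent observes.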
09:41 <jonathanmaw> ok.
09:42 <jonathanmaw> and iiuc, https://gitlab.com/BuildStream/buildstream/commit/c14424ed1e8c457b4a3a886824c2885d0c82c2a4 is the commit where sandboxing stopped using a new session
09:43 <tristan> not really, it's one of them
09:43 <tristan> that one also makes the signal handling conditional on interactive mode
09:43 <tristan> the original one only did new_session
09:45 <tristan> jonathanmaw, I feel like maybe this is too involved; it really is hard to solve, and might mean really understanding linux namespaces and how to set up a functional console in a namespaced process with everything unshared
09:45 <tristan> maybe I threw you a curve ball
09:46 <jonathanmaw> ok, for the moment I'll try and find out where the SIGSTOP is coming from.
09:46 <tristan> yeah, that might help; I'm not even really sure how to trace that
09:46 <tristan> maybe strace -f will help with that, though
09:46 <tristan> (does strace record signals, though?)
09:46 <jonathanmaw> tristan: I believe so
09:48 <tristan> I think I suspected a SIGTSTP was coming from that process, and that's why I removed the _signals.suspendable() in interactive mode
09:48 <tristan> but then I left that out, because anyway it makes more sense not to do that signals stuff (which is intended for the child tasks) directly in the parent session
09:57 <jonathanmaw> hmm, strace seems to break fuse in some way. I can confirm that strace is capable of picking up SIGSTOP, though.
09:58 <jonathanmaw> it says which signal the PID came from, and there's si_code in there as well, but I'm not sure what that part means.
10:00 <jonathanmaw> aha, man 2 sigaction seems to be informative
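The siginfo fields jonathanmaw mentions (`si_pid`, `si_code`) can also be inspected from Python without strace: `signal.sigwaitinfo()` returns the siginfo of a blocked, pending signal, including the PID that sent it. A toy demonstration, with a forked child sending us SIGTSTP:

```python
import os
import signal

# Block SIGTSTP so it stays pending instead of stopping us, then let a
# forked child send it; sigwaitinfo() exposes si_pid (the sender's PID).
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGTSTP})
child = os.fork()
if child == 0:
    os.kill(os.getppid(), signal.SIGTSTP)
    os._exit(0)

info = signal.sigwaitinfo({signal.SIGTSTP})
os.waitpid(child, 0)
print(info.si_pid == child)  # True: siginfo names the sending PID
```

For a process you can't modify, `strace -f -e trace=signal` (as suggested above) records the same delivery events externally.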
10:06 <tlater> I'm having trouble building the gnome-modulesets base system. It looks like it doesn't contain a /usr/bin directory...
10:06 <tristan> hmmm
10:07 <tristan> that wouldn't make much sense
10:07 <tlater> I don't set split-rules anywhere, either
10:08 <tristan> split-rules should not affect that
10:09 <tristan> tlater, always assume that I need all the specific details
10:09 <tristan> the log file can help, maybe
10:09 <tristan> tlater, but in any case, once you get past base-configure.bst you should have /usr/bin
10:09 <tristan> it's a debian import
10:09 <tristan> maybe there was an error in the latest debian import?
10:10 <tlater> Ah, I don't get past base-configure
10:10 <tlater> Because it's lacking ln -s
10:10 <tristan> no
10:11 <tristan> it's not lacking ln -s
10:11 <tristan> tlater, maybe there was an error in the last debian import, or the debian import you are using
10:11 <tristan> base-system.bst is the import element
10:11 <tristan> that downloads a debian image created by multistrap
10:12 <tristan> I don't know what ref you have in there
10:12 <tristan> but I suspect it's a bad one
10:12 <tlater> `bst track` should solve that, shouldn't it?
10:13 <tristan> tlater, `bst track base/base-system.bst` should get you the latest ref
10:13 <tristan> whether the latest ref is broken or not, I don't know
10:13 <tristan> I don't usually expect them to fail
10:13 <tristan> but the other day there was a disk-out-of-space condition
10:13 <tlater> I'll try that again, though it didn't work yesterday.
10:13 <tristan> tlater, you can also try 37fea071158573b62f186f03cdbc450ddefcc5c40ed622acb2d4eb8dd2eed482
10:14 <tristan> `bst track base/base-system.bst` didn't work?
10:14 <tlater> It didn't solve the problem, but it was functional
10:15 * tristan doesn't understand
10:15 <tlater> I had the same build issue yesterday, and tried `bst track` just in case something was wrong with my ref - it didn't solve the build issue then.
10:17 <tlater> Though it did successfully update the ref
10:17 <tristan> tlater, try 37fea071158573b62f186f03cdbc450ddefcc5c40ed622acb2d4eb8dd2eed482
10:17 <tlater> I will in a second, waiting for it to download things :)
10:18 <tristan> this one looks successful https://gnome7.codethink.co.uk/debootstrap-logs/debootstrap-ostree-2017-07-14-103716-amd64.txt :-/
10:19 <tristan> yesterday's failed: https://gnome7.codethink.co.uk/debootstrap-logs/debootstrap-ostree-2017-07-13-100406-amd64.txt
10:19 <tlater> Hm, perhaps I just happened to get that twice somehow.
10:19 <tristan> the day before was successful https://gnome7.codethink.co.uk/debootstrap-logs/debootstrap-ostree-2017-07-12-100401-amd64.txt
10:20 <tristan> the 11th was successful
10:20 <tristan> yeah, it could be
10:20 <tristan> we're probably ignoring the return value of multistrap in the debootstrap-ostree scripts
10:20 <tristan> that could use a patch
10:23 <tristan> ssam2, maybe artifact push errors should not be fatal https://gitlab.com/baserock/definitions/-/jobs/22481024 ?
10:23 <tristan> just a thought
10:25 <tlater> Yeah, this seems to have worked. The build is taking significantly longer at any rate. Thanks tristan :)
10:25 <ssam2> tristan, oh, I didn't notice that was the issue
10:25 <ssam2> tristan, they should be highlighted better, at least!
10:26 <ssam2> tristan, but in this case I do want it to fail if it can't push. maybe that should be an option
10:26 <ssam2> FWIW I looked at adding a 'preflight' artifact cache check, but it turned out quite difficult
10:26 <ssam2> it probably needs a 'ping' option added to bst-artifact-receive
10:27 <ssam2> will open an issue
10:36 <tlater> [c8dd6b30][ push:base/ninja.bst            ] WARNING Not pushing tainted artifact.
10:36 <tlater> \o/
10:40 <juergbi> tristan: do you agree with my latest comment on the MR regarding push?
11:29 <tristan> juergbi, so...
11:30 <tristan> juergbi, I was looking at https://github.com/dbnicholson/ostree-push/blob/master/TODO
11:30 <tristan> juergbi, from what I understand, there is *some* support; I'm not sure how much it covers, we should look deeper
11:31 <tristan> juergbi, specifically the third point
11:31 <tristan> juergbi, in our case, as I understand it at least, we push two separate refs which have all the same objects
11:31 <tristan> the second push should only push the ref
11:31 <tristan> and indeed, as I understand the code, we push everything at the tip of a commit through the tar streaming thing
11:32 <tristan> not a subset of only what the server lacks
11:32 <juergbi> yes, but as i mentioned in my comment, the bulk of the code already supports pushing multiple refs at once (with identical or differing commits)
11:32 <tristan> Ah
11:32 <juergbi> i.e., using that to avoid the weak ref penalty should be trivial
11:32 <juergbi> the third TODO point would still be nice to have, but less crucial/urgent
11:32 <tristan> juergbi, Okay yes; I agree
11:32 <tristan> the third point would still be a great optimization
11:32 <juergbi> yes, definitely
11:33 <tristan> ok
11:33 <tristan> So... juergbi, are we about ready to merge that?
11:34 <tristan> I would like to merge it first because you and tlater have this conflict with artifact data... it would be good to have him rebase immediately after merge, and have cache keys recorded in some main artifact.yaml along with some other attributes, like workspaced:
11:34 <tristan> And I think tlater is also getting very close to a mergeable branch
11:35 <juergbi> essentially yes. let me test this push change and then it's hopefully good to go
11:35 <juergbi> ah, actually, i forgot to make the change from keys.yaml to artifact.yaml
11:35 <juergbi> probably makes sense to do this now before the merge instead of in tlater's branch
11:35 <tristan> tlater, btw if you are going to issue that warning (I think it's a good warning to have too), ensure that Element._push() returns False for that, and then not-pushed artifacts will show up as skipped elements in the queue
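The advice above can be sketched as follows. This is a hypothetical illustration, not the real `Element` implementation: the contract being described is simply that `_push()` returns `False` for a tainted artifact so the push queue can record the element as skipped rather than failed.

```python
# Hypothetical sketch of the _push() skip contract discussed above.
class Element:
    def __init__(self, name, tainted):
        self.name = name
        self._tainted = tainted

    def _push(self):
        if self._tainted:
            print("WARNING Not pushing tainted artifact.")
            return False  # the queue shows this element as skipped
        # ... real push logic would go here ...
        return True

pushed = [e.name for e in (Element("a", False), Element("b", True)) if e._push()]
print(pushed)
```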
11:36 <tristan> juergbi, sure, sounds like a simple rename but will be easier for tlater to follow
11:36 <juergbi> yes, and we don't break artifact metadata that way
11:36 <tristan> that too :)
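A guess at the shape such a renamed metadata file could take, based only on what is mentioned in this conversation (cache keys plus attributes like `workspaced:`). The field names and layout here are illustrative; the actual schema is whatever the branch defines.

```yaml
# artifact.yaml - hypothetical illustration only
keys:
  strong: <strong-cache-key>
  weak: <weak-cache-key>
workspaced: false
```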
11:39 <tristan> juergbi, side note: you may have noticed PushExistsException, which is an addition of my own
11:40 <tristan> it happens when you try to push the same artifact twice
11:40 <tristan> in our case, it will mean we cannot truly use branches for weak artifacts; they will silently fail as non-fatal pushes
11:41 <juergbi> hm, right, this is an issue
11:41 <tristan> juergbi, you probably have to make sure that when we push the two refs, the strong key push is not aborted just because a weak key already exists
11:41 <tristan> I think, though, that not using branches is sensible, especially since you already prioritize the ideal key at pull time
11:42 <juergbi> we should update the existing weak ref, though
11:42 <tristan> because we can't really know what is a sensible branch tip; the idea of age also does not make sense
11:42 <tristan> We could
11:42 <tristan> and it may make sense most of the time
11:42 <tristan> but not truly, i.e. you could have the same weak key on two different branches
11:43 <tristan> i.e. two buildstream project branches could produce the same weak key for a given element
11:43 <tristan> and neither of them is "better"
11:43 <juergbi> yes, there are no guarantees
11:44 <tristan> I think it's not worth updating the branch tip
11:44 <juergbi> ok, so let's just fix it such that it doesn't abort due to a pre-existing weak ref
11:45 <tristan> especially since, even if the concept of "age" worked "most of the time", we don't know that two separate users had today's master sources of the same buildstream project branch when pushing a weak ref
11:45 <tristan> we cannot even deduce a correct age
11:46 <juergbi> true, but i would say in most typical scenarios, the latest push winning is on average a better end result than the first push winning
11:46 <tristan> :)
11:47 <tristan> yeah
11:47 <tristan> juergbi, well, take it this way: it's a bit complicated to do, because you need to know the parent ref you want to push it as (and you don't)
11:47 <tristan> and it's low priority because of all the uncertainties
11:48 <tristan> juergbi, if you can easily make it do that, go ahead; if not, put it aside and file a bug
11:48 <tristan> good?
11:48 <juergbi> yes, sounds fine to me
11:48 <tristan> maybe server side you can force it by getting the latest branch tip
11:49 <tristan> but then, concurrency is... a bit of an issue
11:49 <tristan> also, server side changes need a bump in the protocol version, and we need to update buildstream installations on cache servers (not really an issue right now, there exist exactly 2 artifact servers)
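The policy agreed above can be sketched like this. All names here are hypothetical (the real `PushExistsException` lives in BuildStream's push code, and the remote is faked): the point is only that a pre-existing ref on the server is treated as non-fatal, so the strong-key push is never aborted because the weak ref already exists.

```python
# Hedged sketch: pushing strong + weak refs, tolerating pre-existing refs.
class PushExistsException(Exception):
    """Raised when a ref being pushed already exists on the server."""

class FakeRemote:
    def __init__(self, existing):
        self.refs = set(existing)

    def push_ref(self, ref):
        if ref in self.refs:
            raise PushExistsException(ref)
        self.refs.add(ref)

def push_artifact(remote, refs):
    pushed = []
    for ref in refs:
        try:
            remote.push_ref(ref)
            pushed.append(ref)
        except PushExistsException:
            continue  # already on the server: non-fatal, keep going
    return pushed

# The weak ref already exists server-side; the strong push still succeeds.
remote = FakeRemote(existing={"proj/elem/weak"})
result = push_artifact(remote, ["proj/elem/strong", "proj/elem/weak"])
print(result)
```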
12:29 <juergbi>     RuntimeError: Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment. Consult http://click.pocoo.org/python3/ for mitigation steps.
12:29 * juergbi checks python config
12:30 <ssam2> you need to force LANG=C.UTF-8 and LC_ALL=C.UTF-8
12:31 <juergbi> ssam2: a regular xx_XX.UTF-8 locale should work as well, shouldn't it?
12:31 <ssam2> yeah
12:31 <juergbi> i get this on the SSH receive side. my locale is only set via the login shell, i suppose, but not when directly executing the receive process
12:32 <juergbi> not sure where to properly fix that
12:32 <ssam2> in a way it's a bug in the host OS
12:33 <ssam2> e.g. https://github.com/fedora-cloud/docker-brew-fedora/issues/14
12:33 <ssam2> (comment #3)
12:34 <juergbi> ta
12:37 <juergbi> ssam2: do you know where LANG is set on a host OS without this issue?
12:37 <tristan> oh :-/
12:37 <juergbi> a non-interactive non-login shell doesn't read any bashrc or profile, does it?
12:37 <tristan> I don't think so
12:37 <juergbi> i.e., sounds like i have to fix this either in my sshd config or maybe PAM
12:37 <tristan> it's a similar situation for cron jobs
12:38 <ssam2> is /etc/locale.conf relevant?
12:38 <juergbi> that has the UTF-8 locale
12:38 <ssam2> ah, obviously not then
12:38 <tristan> I feel like the right way to avoid this is to use click in a different way
12:38 <juergbi> even the sshd process has LANG set. but sshd doesn't pass it along to the new process
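One common place to fix this, assuming OpenSSH on both ends (an illustration of the "fix it in my sshd config" option, not a confirmed diagnosis of juergbi's host): let the client forward its locale variables and the server accept them.

```
# client side (~/.ssh/config or /etc/ssh/ssh_config)
SendEnv LANG LC_*

# server side (/etc/ssh/sshd_config)
AcceptEnv LANG LC_*
```

Many distributions ship these directives by default, which is why the problem only shows up on some hosts.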
12:39 <tristan> Like, what do we care about utf8 on the bst-artifact-receive side?
12:39 * tristan greps click
12:39 <juergbi> while we probably don't need anything beyond ASCII in this instance, i don't see the issue as lying there
12:41 <tlater> Hm, is there a simple way to remove a specific artifact from the cache? I'm guessing this is backed by an ostree repo locally as well.
12:41 <tlater> Just for testing - not particularly fond of re-building the base system again.
12:41 <juergbi> tlater: rm the file in refs/heads/...
12:42 <tlater> Thanks :)
12:42 <juergbi> refs/heads/PROJECT/ELEMENT/key
12:42 <tristan> ugh
12:43 <tristan> no simple way out of this; this is about python trying to act smart about unicode, rather than whether you actually need to display unicode characters in output
12:43 <tristan> tlater, I usually use `ostree refs --list ~/.cache/buildstream/artifacts/ostree` to list the refs
12:44 <tristan> sorry, + '--repo ~/.cache...'
12:44 <tristan> tlater, then I do `ostree refs --repo ~/.cache... --delete <ref>`
12:44 <tristan> you can also then prune the repo
12:44 <tristan> anyway
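Pulling tristan's commands together as one sequence (environment-dependent: the repo path assumes the default local cache location, and the ref name is illustrative):

```shell
# List, delete, and prune cached artifact refs in the local ostree repo.
REPO=~/.cache/buildstream/artifacts/ostree
ostree refs --repo "$REPO"                           # list cached artifact refs
ostree refs --repo "$REPO" --delete project/element/key
ostree prune --repo "$REPO" --refs-only              # reclaim unreferenced objects
```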
12:45 <tristan> juergbi, ok, so I made a change to use click instead of argparse, and that was mostly because man page generation was breaking
12:45 <tristan> but man page generation is half broken anyway
12:46 <tristan> juergbi, we can revert that to argparse if it's an issue; it seems the click_man extension just takes whichever setup.py 'entry_point' comes first in the arbitrarily ordered list of entry points
12:47 <tristan> so I end up running the click_man extension multiple times until it chooses to generate pages for `bst` instead of `bst-artifact-receive`
12:47 <juergbi> well, the whole thing might be a config issue on my system, so i wouldn't jump to that
12:47 <tristan> if it hits bst-artifact-receive, it throws an ugly error if it's argparse
12:47 <tristan> yeah, it's working on gnome7 and on the baserock cache :)
12:48 <tristan> juergbi, do you want super powers on gnome7?
12:49 <juergbi> i've fixed / worked around it locally now
12:50 <juergbi> i could check how sshd is configured on gnome7, but other than that, i don't think i need super powers there
12:50 <tristan> alright
12:50 <tristan> I can easily add you a user if you need, though
12:51 <juergbi> ok, thanks
12:51 <juergbi> push to the local test server is working :)
12:51 <tristan> I'm going to disappear shortly...
12:51 <gitlab-br-bot> buildstream: issue #45 ("Artifact pull jobs say "SUCCESS" even when no artifact was fetched") changed state ("opened") https://gitlab.com/BuildStream/buildstream/issues/45
12:52 <gitlab-br-bot> push on buildstream@dual-cache-keys (by Jürg Billeter): 20 commits (last: widget.py: Use _get_full_display_key()) https://gitlab.com/BuildStream/buildstream/commit/ebfdb5905659f512ae773b4bf52cff29280fc746
12:52 <gitlab-br-bot> buildstream: merge request (dual-cache-keys->master: Dual cache keys) #55 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/55
12:52 <tristan> damn, vte doesn't want to build, with exactly the same runtime and compile / link line as with flatpak-builder, where it succeeds
12:53 <juergbi> how does it fail?
12:53 * tristan will have to dig a bit deeper into flatpak-builder over the weekend
12:53 <juergbi> i've pushed a branch update. shall we merge after integration tests pass?
12:53 <tristan> like this: /usr/lib/gcc/x86_64-unknown-linux/6.2.0/../../../../x86_64-unknown-linux/bin/ld: .libs/libvte_2_91_la-vtegtk.o: relocation R_X86_64_PC32 against undefined hidden symbol `_vte_marshal_VOID__STRING_BOXED' can not be used when making a shared object
12:53 <juergbi> it could definitely use more people testing, but i'm not aware of any pending issues
12:54 * tristan hits the fancy blue button on the web UI
12:54 <jonathanmaw> hrm, strace breaks fuse, by the looks of it.
12:54 <juergbi> \o/
12:54 <tristan> and then: /usr/lib/gcc/x86_64-unknown-linux/6.2.0/../../../../x86_64-unknown-linux/bin/ld: final link failed: Bad value
12:55 <jonathanmaw> I can't read the full backtrace because the output overwrites it with the progress indicator
12:55 <tristan> tlater, so you are going to have to rebase against that, and if you have a revision by the end of the day, I will try to merge it over the weekend
12:55 <tristan> jonathanmaw, bst | cat
12:55 <jonathanmaw> neat
12:56 <tristan> There is a bug: hitting control-C with bst | cat will cause a BrokenPipeError that is not handled :-/
12:56 <tristan> but it doesn't really matter much :)
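One common way to handle the `BrokenPipeError` mentioned above can be sketched as follows (illustrative only; `emit_logs` is a hypothetical stand-in for the frontend's output loop, not how bst actually structures it): catch the error when the pipe reader (e.g. the `cat` in `bst | cat`) goes away, and close stderr so the interpreter does not print a secondary error at shutdown.

```python
import sys

# Sketch: tolerate the reader of our stdout pipe disappearing mid-output.
def emit_logs(lines, out=sys.stdout):
    try:
        for line in lines:
            print(line, file=out)
            out.flush()
        return 0
    except BrokenPipeError:
        sys.stderr.close()  # suppress "Exception ignored" noise at exit
        return 1

rc = emit_logs(["[build] hello", "[build] world"])
print(rc)
```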
12:56 <tristan> jonathanmaw, and for you... I will take a look also, and if there is nothing crazily outstanding, I'll also merge the dpkg plugins
12:57 <tlater> Should hopefully manage; my only problem at this point is that artifacts stay tainted after re-building. Which is obviously a stupid oversight, but I keep deleting the wrong artifacts -.-
12:57 <tristan> tlater, don't forget to remove the corresponding extract directories
12:57 <tristan> tlater, that's what is biting you; sorry I didn't think of that earlier
12:58 <tristan> tlater, if the artifact already has an extract directory, I think we don't even bother looking at the ostree and just take that
12:58 <tlater> Well, I just accidentally deleted the full ostree repo for this project, twice in a row. That's not the behavior I saw, unfortunately.
12:59 <juergbi> extract directory names are based on the ostree commit instead of the cache key now with my branch
12:59 <tristan> tlater, I don't understand what "stay tainted after rebuilding" means
12:59 <tristan> tlater, if a given cache key produces a tainted artifact, that cache key should always be a tainted artifact
12:59 <tristan> juergbi, ahhh interesting :)
13:00 <tlater> But only that one key; atm new keys also become tainted
13:00 <tlater> Even without a workspace
13:00 <juergbi> ah, that's odd
13:00 <tristan> juergbi, not entirely necessary I think, though; it's all hardlinks
13:00 <tristan> I guess we save a bit of overhead, though
13:00 <juergbi> tristan: the reason was to avoid creating extract dirs with weak cache names
13:00 <tristan> tlater, because they depend on that tainted artifact?
13:01 <juergbi> tristan: we don't always know the strong key from the artifact before we actually extract it
13:01 <tristan> juergbi, anyway, no problem with that :)
13:02 <tlater> tristan: No, because the associated element loads the artifact metadata of the old artifact, and fails to delete it when building a new artifact. Just a bit of debugging...
13:02 <tristan> mmmm
13:02 <gitlab-br-bot> buildstream: merge request (dual-cache-keys->master: Dual cache keys) #55 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/55
13:02 <gitlab-br-bot> push on buildstream@master (by Tristan Van Berkom): 20 commits (last: widget.py: Use _get_full_display_key()) https://gitlab.com/BuildStream/buildstream/commit/ebfdb5905659f512ae773b4bf52cff29280fc746
13:02 <gitlab-br-bot> buildstream: Jürg Billeter deleted branch dual-cache-keys
13:03 <tristan> tlater, take cache key resolution time into consideration, and you might want to rebase before resolving that, because it may have changed slightly
13:03 <tristan> tlater, or wildly
13:04 <tlater> Probably a good call
13:04 <tristan> tlater, probably you want to ensure that... we never try to load the 'workspaced' attribute from an artifact we depend on until the moment we try to store the element in the local cache
13:05 <tristan> probably we need some better assertions and policy about when to load artifact metadata across the board
13:05 <tristan> we have an assertion for public data
13:07 <tlater> Wouldn't that be a bit messy if we needed different kinds of artifact data?
13:07 <tlater> ^ Only loading that particular attribute
13:07 <tristan> tlater, it's never correct to load artifact metadata before we know the correct cache key for the artifact
13:08 <tristan> tlater, and cache keys are resolved late with --track enabled, and juergbi's branch also changes that dramatically
13:08 <tlater> I guess I'll have to see juergbi's branch
13:08 <tristan> the "cache key for build" is in fact the effective key
13:08 <tristan> juergbi, I suddenly think that's a better name :)
13:08 <tristan> "effective key"
13:09 <juergbi> it's only the effective key if we're not using a previously built artifact
13:09 <tristan> really?
13:09 <tristan> how does that change?
13:10 <juergbi> in non-strict mode, the strong key in the previously built artifact may differ from the strong key that would be in an artifact if we were to build it now
13:10 <juergbi> as it may have been built with different dependency versions
13:10 <tristan> juergbi, I understand 3 keys... the weak key (without deps)... the strong key (the idealistic key that we would produce when building in strict mode)... and the effective key (of the artifact we are really going to produce, which only might be the strong key)
13:10 <tristan> Is that not correct?
13:11 <juergbi> this is correct, but once we build the artifact, that effective key is stored in the artifact
13:11 <tristan> juergbi, and... in general... we only ever want to load metadata from the effective key
13:11 <juergbi> and when we use that artifact in a later session again, we load it from that artifact. and that key may be different than the other 3
13:11 <juergbi> i.e., if you only look at a single session, you 'only' have 3 keys
13:12 <tristan> :-/
13:12 <juergbi> but across sessions in --no-strict mode, 4 keys are relevant
13:13 <tristan> juergbi, we are storing the "cache key for build" in the artifact, and caching it as the "cache key for build", correct?
13:13 <tristan> i.e. it's the real cache key of any produced artifact, whether or not it happens to be the ideally strong key
13:13 <juergbi> yes, we always store that key in the artifact metadata when we build an artifact
13:13 <tristan> And that is based on the effective keys of the dependencies
13:14 <tristan> + the weak key
13:14 <juergbi> yes
13:14 <tristan> So whether or not I built a dependency in this session, my effective key will be the same
13:14 <tristan> as long as I'm producing the same thing, of course
13:15 <juergbi> yes
13:15 <tristan> So what is the fourth key?
13:16 <juergbi> when an element is not being built in the current session. when you load it from the cache, you can't calculate the effective key
13:16 <juergbi> you have to read it out from the artifact metadata
13:16 <tristan> Correct
13:16 <juergbi> that read-out key is different than all 3 calculated keys mentioned above
13:16 <juergbi> or rather, may be
13:16 <tristan> juergbi, which means you can never know what your effective key is going to be until you have all the effective keys of your dependencies
13:17 <juergbi> correct
13:17 <tristan> But in my mind, that does not mean there is a fourth key
13:17 <juergbi> not even that is sufficient
13:17 <tristan> rather, the 3rd key cannot be allowed to exist until it's known
13:18 <juergbi> with 3rd key you mean _for_build?
13:18 <tristan> exactly
13:18 <tristan> yes
13:19 <juergbi> yes, that function must only be called if we're building that element
13:19 <juergbi> in that sense, there aren't 4 keys used at the same time, you're right
13:19 <juergbi> but there are 3 functions to calculate a key, and the effective key may be different from all of those
13:19 <tristan> The source of the effective key changes, but it is the same thing; it is the effective key
13:20 <tristan> I don't see it that way
13:20 <juergbi> it could be called that, yes
13:20 <tristan> that is a confusing way to think about it
13:20 <tristan> really
13:20 <tristan> I mean: "there are 3 functions to calculate a key and the effective key may be different from all of those"
13:21 <tristan> Rather, there are 3 ways to reach an effective key: A.) strict mode, you know the key right away (unless delayed by --track)
13:22 <tristan> B.) You are building everything in --no-strict mode, so you could technically know what they are going to be beforehand (but not really necessary to do so, because who needs a third code path; we can just encode the effective key into the artifact and read it back later on)
13:22 <tristan> C.) You are downloading pieces as you go along
13:22 <tristan> 3 functions to produce the effective key, sure
13:23 <tristan> However... regardless of how it was calculated, if the dependencies were stacked the same way, and the inputs are the same, the effective key is always the same
13:23 <tristan> it cannot differ
13:24 <juergbi> if you're rebuilding the element in question
13:24 <tristan> juergbi, how are there 3 functions to calculate a key *not counting* the effective key?
13:24 <juergbi> we're comparing the cases of rebuild and no rebuild, though
13:25 <tristan> I am suddenly very lost; what does "rebuild" entail?
13:25 <tristan> To me it means reproducing the same thing on another computer, or "I lost an artifact in my cache"
13:25 <juergbi> sorry, it should be just build or no build (use cache)
13:25 <tristan> Ok
13:26 <juergbi> weak key, idealistic strong key, and for_build are the three functions that calculate keys. _from_artifact is the function that loads the effective key
13:26 <juergbi> it may be possible to merge the latter two with appropriate checks of the context
13:26 <tristan> _from_artifact() and _for_build() are not the same, but they both produce the effective key
13:26 <tristan> I see what you mean now
13:27 <tristan> It would make sense if we can look at those as the same, yes
13:27 <juergbi> yes, might clear up some confusion
13:27 <tristan> i.e. if we could ensure that every time _for_build() is used, it only ever takes _from_artifact() as input
13:27 <tristan> maybe that's a stretch
13:28 <tristan> juergbi, sorry, I lost my mind for a moment there; I mean... it's a very sensitive/crucial thing
13:28 <tristan> it needs to be understood well
13:28 <juergbi> _from_artifact() can't be used yet when _for_build() is used
13:28 <juergbi> as there is no cached artifact at that point
13:29 <juergbi> but because of this, we might actually be able to merge them into a get_effective_key() function that always does the right thing
13:29 <tristan> juergbi, _from_artifact() (of a dependency) *is* used to produce _for_build() (of the element we're building), isn't it?
13:29 <juergbi> yes, for dependencies
13:29 <juergbi> i meant, you can't use it for the element itself
13:30 <tristan> that's what I meant by "if we could ensure that every time _for_build() is used, it only ever takes _from_artifact() as input"
13:30 <tristan> of course
13:30 <juergbi> that's already the case
13:30 <tristan> Ok, that's a good thing
13:30 <tristan> juergbi, because it's not strictly necessary
13:31 <tristan> juergbi, when you know in advance that you will be building an element, you can at that time calculate the cache keys of reverse dependencies based on its _for_build() key
13:31 <tristan> (cause you know what the _from_artifact() *will be* for that element)
13:31 <tristan> But if it's already the case that it doesn't happen, that is cleaner, I think, yes.
13:31 <juergbi> you can do it once you're sure you have all the dependencies in the version you will actually use
13:31 <juergbi> i.e., we couldn't do it early on
13:32 <juergbi> due to pulling
13:32 <tristan> we could if you don't have a pull queue
13:32 <juergbi> yes
13:32 <tristan> or once elements have all passed through the pull queue :)
13:32 <tristan> But let's not
13:32 <tristan> I like that invariant anyway
13:32 <juergbi> agreed, don't add unnecessary code paths
13:34 <tristan> yeah, anyway, I already had the experience of adding code to ensure exactly that premature cache key calculation wouldn't happen (i.e. track queues in play)
13:34 <tristan> So I get paranoid about that; to me it feels like there is always a risk that an element has cached a key value before it's correct
13:35 <tristan> anyway, enlightening conversation; I think that wrapping these in an _effective_key() method which has a decent comment, and using the effective key, might help things be more readable
13:36 <juergbi> i actually added explicit recalculate calls to the pullqueue to be sure
13:36 <tristan> when you end up reading the note around _effective_key(), then you understand the gruesome details surrounding that, but not before hehe
13:36 <tristan> yeah
13:36 <tristan> I saw that in the branch
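The key scheme hashed out in the conversation above can be condensed into a toy model. This is a hedged sketch with hypothetical names, not BuildStream's real implementation: the weak key covers only the element's own inputs; the strict (idealistic strong) key recurses over the dependencies' strict keys; and the effective key recurses over the dependencies' *effective* keys, falling back to the value stored in artifact metadata (`_from_artifact`) when a previously built artifact is being reused instead of built (`_for_build`).

```python
import hashlib
import json

def _hash(obj):
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

class Element:
    def __init__(self, name, inputs, deps=()):
        self.name, self.inputs, self.deps = name, inputs, list(deps)
        self.cached_metadata = None  # artifact metadata from a previous build

    def weak_key(self):
        # Own inputs only, no dependency keys.
        return _hash([self.name, self.inputs])

    def strict_key(self):
        # The idealistic strong key of strict mode.
        return _hash([self.weak_key(), [d.strict_key() for d in self.deps]])

    def effective_key(self):
        if self.cached_metadata is not None:
            # _from_artifact(): read back what was stored when it was built.
            return self.cached_metadata["strong-key"]
        # _for_build(): needs all dependencies' effective keys to be known.
        return _hash([self.weak_key(), [d.effective_key() for d in self.deps]])

dep = Element("dep", {"version": "1.0"})
app = Element("app", {"version": "2.0"}, deps=[dep])
print(app.strict_key() == app.effective_key())  # True: nothing cached yet
```

In `--no-strict` mode, once `dep.cached_metadata` holds a key from an older build, `app.effective_key()` diverges from `app.strict_key()`, which is exactly the "fourth key" situation discussed above.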
* tristan has to exit now13:37
*** tristan has quit IRC13:40
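The invariant tristan and juergbi settle on above — that an element's _for_build() key is only ever derived from the _from_artifact() keys of its dependencies, never from keys computed before those dependencies are resolved — can be sketched roughly as follows. This is a hypothetical model, not BuildStream's actual implementation; the metadata layout and function names are assumptions for illustration.

```python
import hashlib
import json

def key_from_artifact(artifact_meta):
    # A dependency's cache key as recorded in its cached artifact
    # metadata (hypothetical structure; real BuildStream metadata differs).
    return artifact_meta["keys"]["strong"]

def key_for_build(element_config, dep_artifact_metas):
    # The element's own build key depends only on its configuration and
    # the *artifact* keys of its build dependencies.  Keeping this the
    # single code path avoids the premature-key-calculation risk the
    # conversation describes (e.g. keys cached before a pull queue has
    # settled which dependency versions will actually be used).
    payload = {
        "config": element_config,
        "deps": sorted(key_from_artifact(m) for m in dep_artifact_metas),
    }
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
```

Because the dependency keys are read back out of the cached artifacts themselves, there is no window in which a stale, pre-pull key can leak into a reverse dependency's build key.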
tlaterjuergbi: What is supposed to happen in `_get_cache_key_from_artifact` if meta['keys'] is not defined?13:45
juergbithat would be an incompatible cached artifact13:46
tlaterAh, never mind, I had a local artifact cache I forgot to remove13:46
juergbiwe really need to add artifact/key versions to avoid such issues13:46
tlaterProbably, yes13:46
tlaterFixed my issue :)14:19
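juergbi's suggestion above — record an artifact/key version so that incompatible cached artifacts are rejected cleanly instead of failing on a missing `meta['keys']` field — might look roughly like this. The constant name and metadata shape are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical version stamp written into every artifact's metadata;
# bumped whenever the cache key or artifact format changes.
CORE_ARTIFACT_VERSION = 1

def load_artifact_keys(meta):
    # Reject artifacts produced by an incompatible version up front,
    # rather than surfacing a confusing KeyError deep in key lookup
    # (the failure mode tlater hit with a stale local artifact cache).
    if meta.get("version") != CORE_ARTIFACT_VERSION:
        raise KeyError("incompatible cached artifact (version mismatch)")
    if "keys" not in meta:
        raise KeyError("incompatible cached artifact (no cache keys)")
    return meta["keys"]
```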
gitlab-br-botpush on buildstream@workspaces (by Tristan Maat): 20 commits (last: element.py: Use _get_cache_key_for_build() for artifact.yaml) https://gitlab.com/BuildStream/buildstream/commit/f43aa7bbbf8cc02faecbeb7a279d47bf97f6beca14:32
gitlab-br-botbuildstream: merge request (workspaces->master: WIP: Workspaces) #50 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/5014:32
tlaterShould only build dependencies affect the tainted status of an element?14:41
ssam2hmmmmmmm14:44
ssam2I think 'element' is too generic a concept for us to assert that14:44
tlaterI suppose, some script elements may rely on their dependencies doing something14:45
tlaterThen again, that's also a build dependency14:45
ssam2yeah. and if the element is a system image, for example, then it includes elements that might be considered runtime dependencies of something else14:45
juergbibuild dependencies and their runtime dependencies14:45
juergbibut direct runtime dependencies shouldn't matter14:45
juergbii.e., same logic as what's used to calculate the cache key14:46
juergbior in code that would be: element.dependencies(Scope.BUILD)14:47
tlaterOh, that includes build dependencies' run dependencies?14:48
juergbiyes, they are excluded only if you specify recurse=False14:52
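The dependency scope juergbi describes — build dependencies plus their runtime dependencies, but not the element's own direct runtime dependencies, mirroring the cache key calculation — is what `element.dependencies(Scope.BUILD)` walks. A rough toy model of that traversal and the resulting taint logic, with all class and attribute names invented for illustration:

```python
class Element:
    def __init__(self, name, workspaced=False, build_deps=(), runtime_deps=()):
        self.name = name
        self.workspaced = workspaced      # stand-in for "tainted" status
        self.build_deps = list(build_deps)
        self.runtime_deps = list(runtime_deps)

    def scope_build(self):
        # Yield build dependencies and, recursively, their runtime
        # dependencies -- a rough model of Scope.BUILD with recurse=True.
        seen = set()

        def visit_runtime(elem):
            if elem.name in seen:
                return
            seen.add(elem.name)
            yield elem
            for dep in elem.runtime_deps:
                yield from visit_runtime(dep)

        for dep in self.build_deps:
            yield from visit_runtime(dep)

def is_tainted(element):
    # Tainted if the element itself is, or anything in its Scope.BUILD
    # closure is.  Direct runtime dependencies are deliberately not
    # consulted, matching the cache key logic discussed above.
    return element.workspaced or any(d.workspaced for d in element.scope_build())
```

With `recurse=False` (per the last message above), only the immediate build dependencies would be yielded, excluding their runtime closures.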
gitlab-br-botpush on buildstream@sam/artifactcache-preflight-check (by Sam Thursfield): 2 commits (last: Add initial `bst push` command) https://gitlab.com/BuildStream/buildstream/commit/2594a3109c08013b0c161f524f83ec2942657f4114:52
gitlab-br-botpush on buildstream@workspaces (by Tristan Maat): 1 commit (last: element.py: Make element dependencies affect taint status) https://gitlab.com/BuildStream/buildstream/commit/4f343992725db461d17c43fca198e52f0c0c7fcf15:01
gitlab-br-botbuildstream: merge request (workspaces->master: WIP: Workspaces) #50 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/5015:01
gitlab-br-botpush on buildstream@workspaces (by Tristan Maat): 1 commit (last: element.py: Make element dependencies affect taint status) https://gitlab.com/BuildStream/buildstream/commit/7227cd41b8f593357df2c4c10d16c5b48c421de115:01
gitlab-br-botbuildstream: merge request (workspaces->master: WIP: Workspaces) #50 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/5015:01
gitlab-br-botbuildstream: merge request (workspaces->master: Workspaces) #50 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/5015:26
gitlab-br-botpush on buildstream@sam/bst-push (by Sam Thursfield): 1 commit (last: Add initial `bst push` command) https://gitlab.com/BuildStream/buildstream/commit/b9bfe1c727b2c39bb824e3369ed9edc69b95a11815:56
ssam2oh dear15:57
ssam2https://gitlab.com/baserock/definitions/-/jobs/2254081115:57
ssam2this doesn't look good15:57
ssam2i did the same thing locally (built the system that is in the cache) and didn't get any errors15:58
ssam2but, this does look like a bst bug...15:58
ssam2maybe a race?15:58
ssam2seems to be non-fatal, actually...15:58
tlaterIt might be a cache issue, considering the missing files15:58
*** jonathanmaw has quit IRC16:02
*** tlater has quit IRC17:07
gitlab-br-botbuildstream: merge request (sam/bst-push->master: Add initial `bst push` command) #56 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/5617:22
gitlab-br-botpush on buildstream@sam/artifactcache-preflight-check (by Sam Thursfield): 20 commits (last: context.py: Add strict_build_plan option) https://gitlab.com/BuildStream/buildstream/commit/3f065aa70d9d5bb4088ef43b0552e1190764a54e17:24
gitlab-br-botbuildstream: merge request (sam/artifactcache-preflight-check->master: Check for write access to remote artifact cache early on in the pipeline) #57 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/5717:29
gitlab-br-botbuildstream: issue #46 ("Strange artifact pull failures on GitLab CI") changed state ("opened") https://gitlab.com/BuildStream/buildstream/issues/4617:32
*** ssam2 has quit IRC17:46
*** tristan has joined #buildstream18:39
*** ChanServ sets mode: +o tristan18:43
*** tristan has quit IRC22:06
jjardon[m]mmm, is this a bug or have I configured something wrong? https://gitlab.com/baserock/definitions/-/jobs/2258659023:45

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!