IRC logs for #buildstream for Tuesday, 2019-05-21

00:00 *** swick has joined #buildstream
00:09 *** nimish2711 has quit IRC
00:56 *** nimish2711 has joined #buildstream
02:39 *** nimish2711 has quit IRC
06:50 *** tristan has quit IRC
07:09 *** tristan has joined #buildstream
07:48 *** bochecha has joined #buildstream
08:21 <gitlab-br-bot> BenjaminSchubert approved MR !1347 (chandan/coverage-doesnt-need-deps->master: tox.ini: Coverage does not need module installed) on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1347
08:24 <gitlab-br-bot> jennis approved MR !1347 (chandan/coverage-doesnt-need-deps->master: tox.ini: Coverage does not need module installed) on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1347
08:59 *** jonathanmaw has joined #buildstream
09:07 *** phildawson_ has joined #buildstream
09:08 *** tristan has quit IRC
09:36 *** lachlan has joined #buildstream
09:37 <gitlab-br-bot> juergbi approved MR !1325 (jonathan/cached-to-artifact->master: Move Element.__*cached variable to Artifact class) on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1325
09:40 *** lachlan has quit IRC
09:58 *** lachlan has joined #buildstream
10:02 *** lachlan has quit IRC
10:14 * juergbi is confused
10:15 <juergbi> I stopped a hung job and now gitlab doesn't allow me to retry (there is not even a button anymore)
10:15 <juergbi> ah, have to retry on the pipeline level instead of on the job level if it was the last job
10:16 <juergbi> not exactly intuitive to remove the retry button from the job in that case
10:23 <jennis> juergbi, yes I had this earlier too
10:25 <tpollard> jonathanmaw: marge is dead again, you will need to manually merge
10:26 <jonathanmaw> ok
10:45 <gitlab-br-bot> jonathanmaw closed issue #1015 (Move Element.__*cached variable to Artifact class) on buildstream https://gitlab.com/BuildStream/buildstream/issues/1015
10:45 <gitlab-br-bot> jonathanmaw merged MR !1325 (jonathan/cached-to-artifact->master: Move Element.__*cached variable to Artifact class) on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1325
10:46 *** tristan has joined #buildstream
10:47 <tpollard> \o/
11:01 *** lachlan has joined #buildstream
11:04 *** lachlan has quit IRC
11:06 *** lachlan has joined #buildstream
11:24 *** pointswaves has joined #buildstream
11:34 <tpollard> tristan: would you still be ok with seeing a user config option for disabling the status 'bar' (I'd take that as the complete status header)?
11:35 <tpollard> probably have value in being a bst main option too
11:38 *** ChanServ sets mode: +o tristan
11:40 <tristan> tpollard, I think it could be alright... it should be handled in the same way as all options of course... probably should have --log-smth on the CLI and be an option in the logging: section of the config file for consistency with other logging options
11:40 <tristan> tpollard, however
11:40 <tristan> I *think* that it's technically unneeded, I think you can provoke that exact behavior without any option
11:40 <gitlab-br-bot> cs-shadow merged MR !1347 (chandan/coverage-doesnt-need-deps->master: tox.ini: Coverage does not need module installed) on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1347
11:41 <tristan> oh well, not precisely, you would also lose interactive prompts
11:41 <tristan> tpollard, i.e. it's a very similar mode as `bst --colors ... | cat`
11:42 <tristan> hmmm, due to click's handling of boolean options it might not be prefixable with --log-
11:43 <tristan> lemme check current options, actually technically we have --color / --no-color already which is not --log-color / --log-no-color
11:44 <tristan> tpollard, probably we can have --status/--no-status on the CLI, defaulting to whatever value is in the config; and have `status: True` in data/userconfig.yaml (the defaults)
11:46 <tristan> right, I was wrong about the --log prefix... we don't have anything like that (--verbose/--no-verbose, --debug/--no-debug, --message-lines, --error-lines, --colors, --no-interactive... all examples of logging related options)
11:48 <tpollard> tristan: yep I think --status/--no-status makes sense with the default as true in userdata
11:48 <tpollard> under the logging options
11:51 <tpollard> which I'd have set Status._header to None, queried in render()
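tristan's suggested default could land in data/userconfig.yaml roughly like this (a sketch based only on the discussion; the exact key and its placement under the logging section are assumptions, not the shipped file):

```yaml
logging:
  # Whether to render the interactive status header; overridden
  # by a hypothetical --status/--no-status on the command line
  status: True
```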
11:53 <tristan> tpollard, I think there are already conditions under which we either create the status bar or not
11:53 <cs-shadow> hi, it seems like several CI jobs get "stuck" these days. Any clues as to what's causing that?
11:54 <tpollard> cs-shadow: raoul is looking at https://gitlab.com/BuildStream/buildstream/issues/1023
11:54 <tristan> tpollard, it's not created if we're not connected to the terminal and/or the terminal we're created on does not support the ANSI escape codes that it requires
11:56 <cs-shadow> tpollard: Thanks! I think I am referring to a different issue. For example, https://gitlab.com/BuildStream/buildstream/-/jobs/216320678. It was started 15 mins ago but nothing seems to be happening there. Is that perhaps because we have too many jobs running in parallel and it's unable to find resources?
11:56 <tristan> tpollard, so you want to first create the user config and parse it in _context.py, then add a main option in _frontend/cli.py, and finally you want to add that option to the mapping here: https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/_frontend/app.py#L174
11:57 <tristan> tpollard, those come together to get the right order of precedence of defaults and such... that happens early enough that you can decide whether or not a Status() object is needed
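The CLI side of what tristan sketches could look roughly like this with click's paired boolean flags (a sketch; the option name and the fallback-to-config wiring are assumptions from the discussion, not BuildStream's actual code):

```python
import click

@click.command()
@click.option("--status/--no-status", "status", default=None,
              help="Render the status header (default: from user config)")
def main(status):
    # None means "not given on the CLI": fall back to the value
    # loaded from userconfig.yaml, mirroring how app.py maps CLI
    # options onto configured defaults.
    if status is None:
        status = True  # stand-in for the configured default
    click.echo("status enabled" if status else "status disabled")
```

Invoked with neither flag, the configured default wins; either explicit flag overrides it, which gives the precedence order tristan describes.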
11:58 <tristan> looking at the mapping https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/_frontend/app.py#L174, one cannot help but notice the eyesore
11:58 <tristan> that we are accessing the private _strict_build_plan
11:59 <tristan> one has to wonder if that is really private, I think it is actually
12:01 <tpollard> tristan: Yep, I did pull/cache buildtrees so it's part of the codebase I'm comfortable with to some extent :). Thanks for pointing out that it probably makes sense to capture the need for Status() earlier
12:01 <tristan> Heh :)
12:03 <tpollard> cs-shadow: that one would suggest a resource bottleneck to me. One thing I would like to handle in a more automated fashion would be to stop a pipeline execution if another push was sent within a certain threshold
12:03 <tpollard> if a push is sent early enough the git checkout fails obviously, but it's still wasted resources
12:03 <tpollard> especially with no enforced policy to manually cancel against non-MR branches
12:11 *** lachlan has quit IRC
12:28 <tpollard> tristan: ah yes, render & clear are no-ops dependent on the terminal init return. That could be the most suitable place for the option capture too
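The condition being discussed (only build the status renderer when attached to a capable terminal) can be sketched like this; the function name and the TERM probe are illustrative assumptions, not BuildStream's actual code:

```python
import os
import sys

def status_supported(stream=None):
    # Only render the status header when attached to an
    # interactive terminal; piping through `cat` disables it,
    # as in tristan's `bst --colors ... | cat` example.
    stream = stream or sys.stdout
    if not hasattr(stream, "isatty") or not stream.isatty():
        return False
    # A dumb terminal cannot handle the ANSI escape codes the
    # status renderer relies on.
    if os.environ.get("TERM", "dumb") == "dumb":
        return False
    return True
```

Capturing a --no-status style option would simply add one more early-return to the same gate.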
13:41 *** lachlan has joined #buildstream
13:57 *** lachlan has quit IRC
14:01 *** lachlan has joined #buildstream
14:06 <Kinnison> I have a question about project second stage loading
14:06 <Kinnison> Under what circumstances can a project not be in a position to undergo second stage load (i.e. become fully realised)?
14:08 *** lachlan has quit IRC
14:09 <gitlab-br-bot> aevri opened (was WIP) MR !1348 (aevri/fix_logging_regex_test->master: tests/frontend/logging.py: fix error message regex) on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1348
14:11 <Kinnison> Basically I'm trying to work out how early I can call someproject.ensure_fully_loaded()
14:13 <tristan> Kinnison, good luck ;-)
14:13 <tristan> heh
14:13 <tristan> Kinnison, valentind implemented the dual stage load... and I can tell you the reason for it
14:14 *** lachlan has joined #buildstream
14:14 <tristan> Basically, (@) includes have to work cross-junctions
14:14 <tristan> Kinnison, so the first pass load is for resolving junctions; once junctions are all resolved, cross junction includes can be processed, meaning the rest of the project can be resolved
14:16 * tristan created https://github.com/ostreedev/ostree/pull/1862 today
14:17 *** lachlan has quit IRC
14:17 <Kinnison> tristan: So basically if I could come up with a clean way to force-resolve the junctions as early as possible, I could fully resolve the projects and the junctions first, and then load the rest of the elements?
14:17 <tristan> wasn't all that hard actually ... in case anyone is interested, that will let us do `bst checkout --tar - | ostree commit --tree=tar=-`, and should be the first step towards supporting similar in flatpak tooling
14:18 <tristan> Kinnison, Well... that exercise might end up mostly being finding a more sensible name for "first pass" and "second pass"
14:18 <tristan> Which... I would definitely support :)
14:18 <tristan> If it is even cleaner as a result of refactoring, always a bonus
14:18 <Kinnison> tristan: currently, afaict, we defer "second pass" to Element._new_from_meta essentially
14:19 * Kinnison wants to do more work in the loader, which means working out how to second-pass as *early* as possible
14:19 <tristan> Mmmmyeah
14:19 <tristan> Kinnison, I didn't delve too deeply into that proposal honestly
14:19 <Kinnison> which probably means processing the includes on the project in such a way that if there's a cross-junction include, that gets noted, then we load that junction element, and repeat, until we're able to resolve fully
14:20 <Kinnison> at that point we have the parent project, its junction elements, the subprojects, their junctions, blahblah, all fully resolved
14:20 <Kinnison> then we can begin to load the targets
14:20 <Kinnison> (note, only the junctions mentioned by the project.conf and any includes there-from)
14:20 <Kinnison> so that *ought* to be okay
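Kinnison's note-a-junction, load-it, repeat idea is essentially a fixpoint loop. A sketch over toy data (all names here are illustrative, not the loader's actual API):

```python
# Sketch of the iterative resolution described above: process a
# project's includes, note any cross-junction ones, load those
# junctions' projects, and repeat until nothing new turns up.
# 'projects' maps a project name to the subprojects its
# cross-junction includes pull in.
def resolve_junctions(projects, root):
    resolved = set()
    pending = [root]
    while pending:
        project = pending.pop()
        if project in resolved:
            continue
        resolved.add(project)
        # every cross-junction include drags in another project
        for junction in projects.get(project, ()):
            if junction not in resolved:
                pending.append(junction)
    return resolved

# toy data: root includes via a junction into sub, sub into leaf
projects = {"root": ["sub"], "sub": ["leaf"], "leaf": []}
print(sorted(resolve_junctions(projects, "root")))
```

Once the loop terminates, the parent project, its junctions, and the subprojects are all resolved and target loading can begin, as described above.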
14:21 <tristan> Kinnison, and you're not worried about cementing the load process into the loader and out of the elements at all?
14:22 <Kinnison> tristan: I think the load process *ought* to be in the loader
14:22 <tristan> That is splitting hairs a bit, or arguing over terminology
14:22 <tristan> Kinnison, part of the load process will always be in Plugin.configure()
14:22 <Kinnison> at the end of the loader, we ought to have the fully realised set of data necessary to construct the elements, ideally without having to re-read yaml or re-composite things
14:22 <tristan> i.e. it cannot be owned by the core
14:23 <Kinnison> Plugin.configure() is about retrieving information that has already been loaded
14:23 <Kinnison> no?
14:23 <tristan> Right, that is a question of terminology indeed
14:23 <tristan> But now, instead of the base Element class knowing what it needs from the loaded yaml and handling that, you're moving that into the loader I guess
14:24 <Kinnison> Basically I want to move out of Element anything which is actually part of the *format*
14:24 <tristan> as I said, I haven't delved too deeply into that - my only twitch was that "Is the element/source going to have less freedom to evolve now that things get cemented into this loader thing?"
14:24 <Kinnison> and which bits of what yaml get composited together in what order and when is the *format* IMO
14:24 <Kinnison> Once you're beyond the *format*, it's up to plugins and I don't mind
14:25 <Kinnison> But if we've defined it in the format specification in the docs, then IMO the loader should deal with it
14:25 <tristan> Right, but there is all the composition; for that you get out of the loader domain and into the element domain IMO
14:25 <Kinnison> I'd say no, the element is responsible for consuming the composed format
14:26 <tristan> Well, you are aiming to change it so that it would be that, yes
14:26 <Kinnison> Right, your statement is true currently; I'm aiming to make it so that the loader is king for all of that
14:27 <Kinnison> Eventually then the loader may be able to do cross-element optimisation by not repeating composition work which could be shared
14:27 <tristan> Right, which does make me twitch a bit - I understand this is in the interest of better caching
14:27 <Kinnison> Not necessarily caching so much as not-repeating-work
14:27 <tristan> but I don't feel it is more sensible
14:27 <gitlab-br-bot> cs-shadow opened (was WIP) MR !1322 (chandan/src-directory->master: Move source from 'buildstream' to 'src/buildstream') on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1322
14:27 <Kinnison> You don't feel that right now Element and Loader are too tightly coupled via the 'format' specification?
14:27 <tristan> Kinnison, We shouldn't be repeating work as it is
14:28 <tristan> Kinnison, I.e. element class level YAML overrides project YAML, and gets cached once
14:28 <tristan> (cached in memory)
14:28 <Kinnison> Yes, that's Plugin.__defaults
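The caching tristan describes (element-class defaults composited over project YAML once, then reused for every instance of that kind) could be sketched like this; the helper names are illustrative, not BuildStream's actual internals:

```python
# Sketch: composite class-level defaults over project config once
# per element kind, cache the result in memory, and reuse it.
_defaults_cache = {}

def composite(base, overrides):
    # shallow dict composition: overrides win, as class-level
    # YAML overrides project YAML in the discussion above
    result = dict(base)
    result.update(overrides)
    return result

def get_defaults(kind, project_conf, class_conf):
    if kind not in _defaults_cache:
        _defaults_cache[kind] = composite(project_conf, class_conf)
    return _defaults_cache[kind]
```

Subsequent lookups for the same kind hit the cache, which is the "shouldn't be repeating work as it is" point.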
14:28 *** lachlan has joined #buildstream
14:29 <tristan> Kinnison, I don't feel that the definition of what the format is belongs to the loader, to answer your question, no
14:29 <Kinnison> Okay, that's helpful
14:29 <Kinnison> so I need to try and rationalise my thought process and put it out there for debate
14:29 <Kinnison> so we can find a middle ground
14:29 <tristan> Kinnison, I feel that the definition of what an Element's format is belongs to Element, and what a Source is, belongs to Source, and whatever is common, belongs to Plugin
14:30 <tristan> Kinnison, But - I should really say this is only a twitch, I can't allow myself to feel super strongly about that, it's my gut intuition of what is sensible
14:30 <Kinnison> So code-logic-wise I don't necessarily disagree, but invocation time and management I think might belong to Loader but I'm not sure - it may belong to Project
14:30 <Kinnison> I need to think harder about where the behaviour is, esp. now you've clarified what the first-pass vs. second-pass on project is
14:30 <Kinnison> thank you
14:31 <tristan> I sort of recognized it was a twitch when I originally read the email, and refrained from any reaction, and thought - maybe I will understand more when I see the code, and probably the result will be equally sensible :)
14:32 <Kinnison> I don't want to make things worse, for sure :D
14:32 *** lachlan has quit IRC
14:32 <Kinnison> I was thinking that if I moved the logic, it'd be Element's logic => MetaElement and Source's logic => MetaSource
14:33 <Kinnison> so there'd still be *some* separation of Loader vs. Meta{Element,Source}
14:57 *** tristan has quit IRC
15:20 *** tristan has joined #buildstream
15:24 <WSalmon> I think benschubert and aevri have stages running on pipelines that are not latest. Given that we don't have infinite runners and that gitlab does not kill off your old pipeline if you push another, could I make a gentle reminder for us all to be careful to cancel unneeded pipelines (I know I'm not always great at this, and I know not everyone realises that they don't get auto killed, and I know sometimes people have a good reason to keep them running)
15:26 *** ChanServ sets mode: +o tristan
15:26 * tristan sometimes gardens them and ruthlessly kills other people's pipelines that are not 'latest' of their branch
15:26 <tristan> haven't had a complaint yet ;-)
15:26 <aevri> Aha, I wasn't aware of that before, sorry to take up resources :)
15:26 <aevri> I've cleaned mine up from here https://gitlab.com/BuildStream/buildstream/pipelines?scope=running&page=1
15:27 <WSalmon> lots of people aren't :) so I thought a reminder might be a good idea
15:28 <aevri> I do notice a certain wsalmon also seems to have a 'non-latest' job running :P :)
15:28 <tristan> we could probably script a bot that kills them, but then in some edge cases I actually *want* to see the results of different pipelines on the same branch intentionally
15:28 <WSalmon> hahaha, I thought I had checked
15:28 <WSalmon> it's so easily done
15:28 <aevri> yeah, unfortunately that's the default
15:29 <juergbi> I also occasionally cancel non-latest jobs, especially if there are lots pending or running
15:29 <juergbi> would be good to solve the stuck job issue soon
15:31 <WSalmon> yes, that one was one of our randomly hanging ones from much earlier in the day. I wonder if our 22h timeout is a good idea given most of our jobs are sub-1hr
15:31 <juergbi> I'm wondering whether it might be related to parallel testing. The stuck job issue started happening a lot after enabling parallel testing, but it's still possible that it would happen even without parallel testing
15:31 <juergbi> WSalmon: I think we have the long timeout because of the overnight tests. Is there a way to have different timeouts for overnight and regular?
15:31 <WSalmon> I would hope so, I will go have a look
15:32 <tpollard> the issue raised on raoul's ticket definitely only happens to me if I run locally in parallel
15:33 <tpollard> hopefully the bug chasing pays off
15:33 <tristan> WSalmon, I think the 22h timer is excessive, but we need a longer timeout for the nightly tests which build freedesktop-sdk without hitting any artifact cache
15:33 <raoul> Yeah, I've never reproduced it not in parallel
15:33 <tristan> not sure if we can configure the timeout on a per-jobtype basis
15:34 <raoul> tristan: you can set timeouts per test with pytest-timeout
15:34 <tristan> raoul, I mean the gitlab timeout
15:34 <juergbi> raoul: in that case maybe we should disable parallel testing in CI until we find a solution
15:34 <tristan> but that is also possible I guess
15:35 <juergbi> that said, I haven't seen tests hang locally yet despite always running them in parallel
15:36 <raoul> me and tpollard have
15:36 <WSalmon> so, according to the docs you could set it at a runner level, so we could have two runners on the bastion server and get the nightly to build on them and give them the longer timeout; it would be a bit more effort. (I assume that they both take runners from the same pool and that other things would play ball)
15:36 <WSalmon> jjardon, did you set up all the runners, would this be a pain to do?
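raoul's pytest-timeout suggestion can be applied globally so a wedged test fails instead of eating the whole 22h job timeout. A sketch (assuming the pytest-timeout plugin is installed; the 60s ceiling is arbitrary):

```ini
# pytest.ini / setup.cfg sketch: kill any single test that runs
# longer than 60 seconds rather than hanging the whole CI job
[pytest]
timeout = 60
```

Individual slow tests can then opt out with the plugin's `@pytest.mark.timeout(...)` marker.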
15:37 <benschubert> locally I also often have tests hanging
15:38 <benschubert> especially the umask tests
15:49 <juergbi> oh, interesting. wondering why I don't see that issue
15:50 <juergbi> number of workers (and timing) might make a big difference, in case a specific sequence of tests in a particular worker triggers the issue
15:50 <tpollard> maybe 16 runners is the magic number juergbi
15:50 <tpollard> threadripper is just too good
15:51 <WSalmon> I presume juergbi also has a very nice disk, and that the runners and benschubert's wsl have very slow disk
15:51 <WSalmon> *I know benschubert's is a fast disk but it has to go via wsl
15:52 <tpollard> WSalmon: don't make me even more envious
15:52 <benschubert> WSalmon: that's even when running on my Fedora home system
15:53 <WSalmon> oh fair
15:56 <WSalmon> benschubert, have you had them fail and hang? or just one?
15:58 <benschubert> hang only
16:12 <juergbi> benschubert: what tox options do you commonly use on your Fedora system? i.e., how many workers and with or without --integration?
16:15 <gitlab-br-bot> BenjaminSchubert approved MR !1322 (chandan/src-directory->master: Move source from 'buildstream' to 'src/buildstream') on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1322
16:15 <gitlab-br-bot> BenjaminSchubert unapproved MR !1322 (chandan/src-directory->master: Move source from 'buildstream' to 'src/buildstream') on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1322
16:16 *** lachlan has joined #buildstream
16:17 *** bochecha has quit IRC
16:17 <benschubert> juergbi: tox -e py37 -- -x --no-cov is usually what I run
16:18 *** phil has joined #buildstream
16:18 <juergbi> benschubert: that'd be without parallel testing, though
16:19 <juergbi> i.e., do you see hangs without parallel testing or do you enable them some other way?
16:19 <benschubert> juergbi: yup, I get hangs even without parallel testing
16:19 *** phildawson_ has quit IRC
16:19 <juergbi> oh, good to know. so much for that theory
16:19 <juergbi> raoul: ^^
16:19 <raoul> oh, at a similar sort of rate?
16:19 *** lachlan has quit IRC
16:19 <raoul> cause I've never had it
16:20 <benschubert> raoul: I have to cancel roughly 20% of my test runs
16:20 <raoul> wow that's pretty bad
16:20 <raoul> this bug just doesn't make any sense :(
16:20 <benschubert> I can have a better look, but might need to wait, I can only reproduce on my home laptop x0
16:21 <raoul> is it always hanging after the zip.py::test_use_netrc[HTTP] test?
16:22 <benschubert> raoul: I'm more seeing hangs locally on umask tests, and on the CI on the netrc ones
16:23 <benschubert> I however have no idea what can be going wrong :/
16:23 <raoul> ah, yeah I've been trying to track down the netrc one
16:24 <benschubert> good luck :/
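For anyone chasing these hangs, a generic stdlib aid (an editor's suggestion, not something mentioned in the channel) is faulthandler, which dumps every thread's stack so you can see where a stuck run is blocked:

```python
import faulthandler
import tempfile

# When a test run hangs, dumping every thread's stack usually
# shows which frame is blocked. faulthandler writes through a
# real file descriptor, so use a temporary file, not StringIO.
with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f, all_threads=True)
    f.seek(0)
    dump = f.read()

# each frame appears as: File "<path>", line <n> in <function>
print(dump.splitlines()[0])
```

On Unix you can also arm it in advance with `faulthandler.register(signal.SIGUSR1)` and signal the hung process from another shell to get the same dump.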
16:42 *** tpollard has quit IRC
16:43 <gitlab-br-bot> BenjaminSchubert approved MR !1322 (chandan/src-directory->master: Move source from 'buildstream' to 'src/buildstream') on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1322
16:49 <gitlab-br-bot> aevri approved MR !1322 (chandan/src-directory->master: Move source from 'buildstream' to 'src/buildstream') on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1322
16:49 <aevri> It's possible we have an intermittent bug in cache deletion: https://gitlab.com/BuildStream/buildstream/-/jobs/216486082
16:49 <aevri> It failed "test_never_delete_required_track"
16:50 <aevri> With this: [00:00:00] FAILURE dep2.bst: Directory not found in local cache: [Errno 2] No such file or directory: '/builds/BuildStream/buildstream/.tox/py37/tmp/popen-gw0/test_never_delete_required_tra0/cache/cas/objects/33/ee7006f0398c9ceb86b4a5deb2753bb9dd5f13f33d4418589c0f92d173400e'
16:53 <gitlab-br-bot> aevri opened (was WIP) MR !1337 (aevri/set_message_unique_id->master: jobs: refactor, use new set_message_unique_id) on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1337
17:03 *** jonathanmaw has quit IRC
17:08 *** lachlan has joined #buildstream
17:08 <gitlab-br-bot> cs-shadow closed issue #1009 (CI doesn't test sdist packaging includes necessary data files) on buildstream https://gitlab.com/BuildStream/buildstream/issues/1009
17:08 <gitlab-br-bot> cs-shadow merged MR !1322 (chandan/src-directory->master: Move source from 'buildstream' to 'src/buildstream') on buildstream https://gitlab.com/BuildStream/buildstream/merge_requests/1322
17:13 <cs-shadow> Public service announcement: Now that ^ is merged, all buildstream source is now inside a `src` directory. There might be some manual rebasing needed if your branch was adding/removing files. Other than that, if you notice any incorrect links etc, please let me know
17:40 *** phil has quit IRC
17:45 *** lachlan has quit IRC
17:57 *** xjuan has joined #buildstream
18:24 *** slaf_ has joined #buildstream
18:26 *** slaf has quit IRC
18:27 *** slaf_ is now known as slaf
18:34 *** bochecha has joined #buildstream
18:54 *** pointswaves has quit IRC
19:18 *** bochecha has quit IRC
19:22 *** pointswaves has joined #buildstream
19:47 *** bochecha has joined #buildstream
19:58 *** slaf has quit IRC
20:25 *** toscalix has joined #buildstream
20:36 *** toscalix has quit IRC
20:36 *** pointswaves has quit IRC
20:39 *** bochecha has quit IRC
20:39 *** slaf has joined #buildstream
20:55 *** bochecha has joined #buildstream

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!