IRC logs for #buildstream for Friday, 2018-09-07

*** leopi has joined #buildstream04:41
*** tristan has joined #buildstream05:21
*** leopi has quit IRC05:36
*** leopi has joined #buildstream06:11
*** iker has joined #buildstream07:25
*** finn_ has joined #buildstream07:55
*** finn has quit IRC07:57
*** finn_ has quit IRC07:57
*** finn has joined #buildstream07:57
*** finn has quit IRC08:02
*** finn has joined #buildstream08:03
tristanhttps://gitlab.com/BuildStream/website/merge_requests/67 <-- This adds a "Documentation" menu item which links directly to http://buildstream.gitlab.io/buildstream/08:04
qinustyLooks good to me08:06
tristantlater[m], My tests are failing for the crash fixer; I think because the tests are invalid and need fixing - now I have a cleanup job running08:06
*** rdale has joined #buildstream08:06
tristanSo I set my cache quota to 16GB with a 15GB cache, and tracked a new freedesktop-sdk junction to get a nice fat update08:07
tristanAnd it caused the cache to grow and hit the limit08:07
tristanNow I have a cleanup job which has been running for... get this... 20min so far and still running !08:07
tristan22 actually08:08
tristanMy cache is now 13GB08:08
qinustytristan, feel free to chip in to https://gitlab.com/BuildStream/website/issues/21 when you have a chance. I'm trying to collate the argument on both sides.08:08
paulsher1oodi'll ask again... shouldn't cleanup run on startup, in the background?08:08
paulsher1oodybd cleanup regularly takes up to five minutes on reasonable systems08:09
*** phildawson has joined #buildstream08:09
tristanpaulsher1ood, you dont know if you are going to need to cleanup really until you do cleanup, and ideally you dont want to cleanup often, or more than you need to08:09
tristanYes, well I'm complaining about performance with the CAS prune right now08:09
paulsher1oodmy proposal considers that, afaict08:10
paulsher1oods/proposal/suggestion/08:10
tristanWe want to do the least amount of cleanup possible, right ? In case you need the artifacts due to switching branches, you keep around as much as possible with LRU08:10
tristanStill, we also want to balance that against doing too many cleanups; so we try to make enough space when one happens08:11
tristanNow, if you do it at startup, it doesnt mean you wont anyway have to do it again after building or downloading 400 builds08:11
tristanI do agree that it makes sense to evaluate and possibly launch a cleanup at startup, too08:13
paulsher1oodhmm. i'm assuming (maybe wrong...) that the machine is sized to perform at least one full transaction (download, build, deploy)08:13
tristanIt might be that we do, I didnt test that08:13
tristanI just reached the quota in mid-flight08:13
paulsher1oodhad there been a cleanup a) after previous run, or b) on startup?08:13
tristanWell that is considered too; We know all of the artifacts which are needed in the build, and we *never* delete artifacts that are needed for the build in context08:14
tristanWe instead bail out at that point08:14
paulsher1oodybd does the same iirc08:14
tristanSo BuildStream starts with say; you have 15GB of artifacts, and your limit is set to 20GB... We dont know how much space the new artifacts will take08:15
tristanSo we dont cleanup until you approach that limit08:15
tristanIf you start up with the limit reached, then we should indeed make room at startup08:15
paulsher1oodthat's wrong imo08:16
tristanAssuming that creating the artifacts is more expensive than keeping them around, we would want to delete as little as possible08:16
tristanOf course, you never want to just wipe the whole cache08:16
paulsher1oodthere should be a 'get me this much space if possible' target on startup08:17
*** finn has quit IRC08:17
tristanRegardless if we want to bring the cache down to a lower end value (this target exists at startup as well as later on), you still don't know that you won't need to clean up later again08:18
tristanSo, while I agree it makes sense to do it once at startup, I disagree that it will never happen again during a build which uses an unpredictable amount of disk space08:18
tristanAt which point, whether it happened at startup or later, is not too big of a deal, so long as it doesnt happen too often and doesnt take too much time when it does08:19
tristanright now, it's clearly taking way too much time, whether it is at startup or not08:20
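The quota policy described above, cleaning up only once the cache approaches its configured limit (and checking once at startup too), could be sketched roughly like this. `CacheQuota`, `needs_cleanup`, and the 10% headroom value are illustrative assumptions, not BuildStream's actual API:

```python
class CacheQuota:
    """Illustrative sketch: decide when a cache cleanup is needed.

    `quota` is the configured limit in bytes; `headroom` is how close
    to the limit the cache may grow before a cleanup is triggered.
    """

    def __init__(self, quota, headroom=0.1):
        self.quota = quota
        self.headroom = headroom

    def needs_cleanup(self, cache_size):
        # Trigger once the cache reaches (1 - headroom) of the quota,
        # e.g. 90% by default. Checked at startup and during the build.
        return cache_size >= self.quota * (1.0 - self.headroom)


quota = CacheQuota(quota=16 * 1024 ** 3)     # 16GB limit, as in the log
print(quota.needs_cleanup(15 * 1024 ** 3))   # 15GB cache, close to the limit: True
print(quota.needs_cleanup(8 * 1024 ** 3))    # plenty of room: False
```

As tristan notes, passing this check at startup does not mean it won't trip again mid-build, since the space new artifacts will take is unknown in advance.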
tristanqinusty, I don't know why you want to further argue this in issue 21, honestly08:21
tristanqinusty, better to just let it rest right ?08:22
paulsher1ood... in the background. ybd forks to do this.08:22
tristanYeah, that is exactly a problem, CAS is unsafe for this08:23
paulsher1oodoh?08:23
tristanParallelism of cache additions and pruning would be great08:23
tristanWas not possible with OSTree and has not been implemented with CAS08:23
paulsher1oodit doesn't support that? why not?08:23
paulsher1oodybd/kbas *does* support it08:23
qinustyPrimarily, I'm just trying to avoid further conflict on the topic. It was raised due to a discussion by toscalix on an MR of mine.08:23
paulsher1oodqinusty: discussion is not conflict08:24
paulsher1oodthis stuff is nontrivial, it's unsurprising that people may not immediately agree08:24
*** finn has joined #buildstream08:25
tristanpaulsher1ood, problem is when you have file deduplication; and refs, then you (A) Want to delete all the deduplicated objects reachable by nothing other than this artifact which you know you can remove... But (B), you have new commits at the same time which start to refer to these objects you are removing08:25
qinustyAgreed. Bad wording by me, I wasn't too sure when creating the issue that we had resolved this discussion fully.08:25
qinustyTherefore the issue was raised for further discussion.08:25
qinustyFor those which did not fully agree, to read the arguments for both sides.08:25
paulsher1oodthe support in ybd/kbas is achieved by the single trick Kinnison explained to me (ie. atomicity via directories)08:25
paulsher1oodis it possible to adjust cas so that artifacts are directories not files?08:26
tristanpaulsher1ood, not really, it's ultimately easy because artifacts are not deduplicated at all; you only have an extract directory to atomically move, and a tarball to atomically delete08:26
paulsher1oodtristan: i'm confused. "it's ultimately easy"? in bst? what's the problem then?08:27
tristanInstead with CAS, you have hashed objects (the actual files, hashed by their content) and symbolic refs, and you reconstruct the directory on demand using the deduplicated hash objects08:27
tristanSorry, it's ultimately easy with YBD/kbas08:27
paulsher1oodok08:28
tristanbecause every artifact is separate and contains all the data, no deduplication08:28
tristanHere we have to prune a deduplicated store, removing only the files which are not used by any other artifact08:28
tristanin order to remove an artifact08:28
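The pruning problem tristan describes, deleting only the objects no surviving artifact still reaches, is essentially mark-and-sweep over the deduplicated store. A toy model (the dict-of-sets store and `prune` helper are invented for illustration, not CAS's real layout):

```python
# Toy model of a deduplicated store: each ref names the set of
# content-hashed objects that make up that artifact.
store = {
    "artifact/a": {"obj1", "obj2"},
    "artifact/b": {"obj2", "obj3"},   # obj2 is shared (deduplicated)
}
objects = {"obj1", "obj2", "obj3"}


def prune(store, objects, remove_ref):
    """Remove one ref, then sweep: delete only the objects that no
    surviving ref can reach. This is the per-artifact work that makes
    pruning a deduplicated store expensive compared to kbas-style
    one-tarball-per-artifact caches."""
    del store[remove_ref]
    reachable = set().union(*store.values()) if store else set()
    deleted = objects - reachable
    objects -= deleted
    return deleted


print(sorted(prune(store, objects, "artifact/a")))  # ['obj1']: obj2 survives, artifact/b needs it
```

The race tristan points out is exactly that a concurrent commit may start referring to objects after this sweep has decided they are unreachable.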
tristanI think discussion with juergbi and others around this resulted in talk of ref counting of the objects08:29
*** toscalix has joined #buildstream08:29
paulsher1oodewww :)08:29
tristanwhich could allow for parallel commit / prune08:29
tristanwell, considering the problem, it seems like the right approach - it's just not as dead simple as we would love it to be08:30
paulsher1oodso we have lru. if we had also a representation of all artifacts needed by any active jobs, we could prune at will, is that correct?08:31
juergbithe concurrent commit and prune shouldn't be that difficult to handle08:32
juergbiensure mtime is updated for items relevant to the current commit and then make sure we never purge objects that are very recent08:32
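juergbi's suggestion can be sketched as a grace window: a concurrent commit touches the mtime of every object it references, and prune refuses to delete anything touched recently, so a commit racing with prune cannot lose its objects. The function name and window size below are assumptions for illustration:

```python
import time

GRACE = 60  # seconds: never delete objects referenced very recently


def safe_to_delete(obj_mtime, now=None):
    """Prune-side check for juergbi's scheme: an object whose mtime
    falls inside the grace window may belong to an in-flight commit,
    so it must be skipped even if it currently looks unreachable."""
    now = time.time() if now is None else now
    return (now - obj_mtime) > GRACE


print(safe_to_delete(obj_mtime=0, now=1000))    # old object: True
print(safe_to_delete(obj_mtime=990, now=1000))  # just touched: False
```

As Kinnison later observes, this relies on monotonic time on a single machine, which is why it gets shakier in a distributed or NFS-shared setting.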
paulsher1oodcan we guarantee that any file is deleted, any job(s) finding they need it will reconstruct it? and that one pristine version of the file will land when the reconstruction(s) have succeeded?08:32
KinnisonI have spent quite a bit of time thinking about how to garbage collect a CAS while still allowing its use, but I've been thinking on the server side, not local caches.  However as juergbi says, it's mostly about knowing your anchors, chasing them through, and then obeying time.08:33
paulsher1oodok i'll drop out, since i'm sure that between Kinnison, juergbi and tristan there's enough expertise thinking about this08:33
Kinnisonpaulsher1ood: Yes, though it may require arbitrary amounts of work and relies on the concept/hope that an artifact build is reproducible08:33
paulsher1oodmay the force be with you :-)08:33
* Kinnison doesn't want to stop paulsher1ood contributing08:34
* Kinnison is, unfortunately, at the whiteboard and waffle stage of considering garbage collection in his CAS implementation, but hopefully I'll write down my thoughts into something more concrete in the near future08:35
paulsher1oodunderstood, i'm not suggesting that i'll stop contributing :-)08:35
*** leopi has quit IRC08:36
tristanpaulsher1ood, deleting a part of an artifact that might be needed later in the build is one thing; at the time of the next build we could reconstruct it, but deleting a part of an artifact which is already needed by a previous build would be more catastrophic08:37
paulsher1oodprevious build should always be ready to reconstruct?08:38
tristanWell, we need to stage the previous artifacts which are depended on by later builds right ?08:39
tristanOtherwise we find ourselves in a loop where we have to reconstruct things on demand08:39
tristanAlso a possibility, but seems quite undesirable :)08:39
tristanRebuild artifact we already built, because we cleaned something up in between, and then need the artifact again08:39
paulsher1oodi fear that's what ybd does, in effect :)08:40
tristananyway, I agree juergbi and Kinnison have better ideas; just don't want you to think your ideas are unwelcome :)08:41
Kinnisontristan: as I said, it's "safe" albeit requiring potentially arbitrary amounts of work :-)08:41
tristanpaulsher1ood, I suspect that with the way artifacts work in YBD, that doesnt happen; because you will probably lock all the cache keys you need and never delete those in your cleanup thread08:42
tristanpaulsher1ood, but yeah the recursive nature of the code indicates that might be possible with YBD :)08:42
paulsher1oodit is possible, there's no locking of cache keys. iirc i prioritised that the build would always succeed if possible over doing the fewest possible construction steps08:44
* paulsher1ood may be wrong, it's a long time ago08:44
tristanHah !08:45
tristanOk this is damn weird, what failed in CI failed in my smoketesting; I expect it's a really stupid mistake too08:45
* Kinnison realises his proposal relies on the monotonicity of time and worries that might fail in a distributed setting08:45
tristanAh, lower threshold is 50%, and no tolerance, that is why it fails08:45
tristangot it08:45
qinustyhmmmm, what happens if you raised an exception in an asyncio coroutine?08:47
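To qinusty's question: an exception raised inside an asyncio coroutine is stored on the Task and re-raised at the `await` (or when `Task.result()` is called); if it is never retrieved, asyncio logs it when the task is garbage-collected. A minimal demonstration (stock asyncio, no BuildStream specifics):

```python
import asyncio


async def boom():
    raise RuntimeError("failed inside coroutine")


async def main():
    task = asyncio.ensure_future(boom())
    try:
        await task                   # the stored exception is re-raised here
    except RuntimeError as e:
        print("caught:", e)
    # Still retrievable after the task has finished:
    print(type(task.exception()).__name__)


loop = asyncio.new_event_loop()
loop.run_until_complete(main())
loop.close()
```

This prints "caught: failed inside coroutine" followed by "RuntimeError". (On Python 3.7+ `asyncio.run(main())` is the idiomatic entry point; the explicit loop matches the 3.5-era code in the log.)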
*** leopi has joined #buildstream08:52
tristantlater[m], progress on cache cleanup issue here: https://gitlab.com/BuildStream/buildstream/issues/623#note_99629594 :)08:52
toscalixgood morning tiagogomes08:54
tlater[m]Hmm, interesting08:54
*** tristan has quit IRC08:55
tlater[m]tristan: I'm pretty sure your suspicion is right08:56
tlater[m]Gah, stupid mistake. We shouldn't check if we can go down to 50% but whether we can get enough space...08:56
*** tristan has joined #buildstream08:57
tristanoops08:58
tristan<tristan> Kinnison, I'm not sure that it can matter for distributed settings - so long as a CAS resides on a given machine08:58
tristan<tristan> Kinnison, it *might* make a difference, but I think some analysis is needed to ensure it actually does; cleanups and persistence should probably happen on the same machine, even if workers and such have their own local copies of what they requested08:58
tristanAnyway, retyped messages which didnt make it with the network hiccup08:58
Kinnisontristan: Re: distributed I was thinking of a situation where you have a CAS shared among many computers via something like NFS, but if that's not a use-case you want to support then it's less of a problem, for sure.08:59
tristanOhhhh08:59
tristanKinnison, what with the whole RPC protocol and everything for working with CAS in a network environment; it does seem to me that it would be quite a hack to work around it all with NFS mounts09:01
tristanThen again, people on clouds do crazy things and expect it to perform09:01
KinnisonHeh09:01
KinnisonI think if you say "A disk CAS is meant to be used only by a single computer (single time-domain)" then if someone hacks around it, they get to pick up all the pieces09:01
*** kim has joined #buildstream09:01
tiagogomestoscalix ?09:05
toscalixwww does not seem to work on the domain09:06
*** kim has quit IRC09:06
tiagogomesit was never requested09:06
toscalixah, ok, I added it to the ticket09:07
toscalixcan we have it?09:07
tiagogomesyes09:11
*** jonathanmaw has joined #buildstream09:12
toscalixtiagogomes: thanks09:14
qinustytristan, Re https://gitlab.com/BuildStream/buildstream/merge_requests/765... Can we discuss the current logic for marking jobs as skipped and why it isn't ideal? (as you mention in the issue)09:16
tristanqinusty, yup... I'll refresh my memory...09:24
tristanqinusty, What specifically ? I thought we went over all of this last week right ?09:24
qinustyYeah, just formalising my thoughts in a response :D There are a few concerns I have regarding attempting to use the SkipJob exception within the main process.09:26
tristanqinusty, https://gitlab.com/BuildStream/buildstream/merge_requests/765#note_9964109709:27
tristanI just added an example there09:28
tristanWhich should help illustrate what I would expect09:28
tristanqinusty, That sounds like technical details; whenever we launch a plugin delegated operation, regardless of whether it is in the main process or in a task, it is in a try / except harness which should handle SkipJob right ?09:29
*** lachlan has joined #buildstream09:29
tristanqinusty, however; in an ideal world, we only run plugin delegated things via the scheduler (I know there are some edge cases where we currently do not, though)09:30
tristanqinusty, That said; SkipJob anyway does not become public API09:30
tristanqinusty, which means a plugin delegated operation cannot raise Skip, We raise Skip from the Queue implementations that run inside of a task, so ... it cannot even happen right ?09:31
tristanqinusty, for instance; I would not expect Skip to be raised from within ArtifactCache() code; I would expect it raised from the task operation in PullQueue / PushQueue, when it has determined that all cache servers traversed were futile09:32
tristanqinusty, The Push/Pull Queues need to determine that with well defined APIs of the ArtifactCache09:32
tristanqinusty, Does that make sense ?09:32
qinustypartial thoughts documented. My main concerns are that we cannot raise SkipJob to be caught by the scheduler simply because we are working from within a child process, and on return we are within an async callback (which afaict, doesn't have a nice way to propagate the exception to scheduler)09:34
tristanEh ?09:34
qinustyYeah it does. Currently the only use for SKIPPED is within the artifact cache, in my local implementation I have moved the exception raising to within element. However could move this to Queue potentially due to the logic in place for returning False on queue.process() for skipped09:35
tristanOk ok... so we are talking about removing the Queue subclass's ability to manually mark a job as skipped as a consequence of it having been skipped in processing09:35
tristanqinusty, I think Skip should only ever be raised in Queue implementations09:35
qinustyYes. Currently the Queue.process() returns True for processed, False for skipped. From what I understood, you thought this was not ideal09:35
tristanRight, that should be removed in favor of something that the Queue implementations don't need to think about09:36
tristanSince instead of using that API contract, they will now raise Skip in their Queue.process() implementation09:36
tristanqinusty, And you are incorrect about Queue.process()09:36
tristanqinusty, See the docs comment in queue.py09:37
tristanqinusty, That is something that PushQueue/PullQueue do entirely of their own volition09:37
qinustyAh okay, So currently Queue.done() is the issue?09:37
tristanqinusty, It is the Queue.done() API contract which needs to change; and the PushQueue/PullQueue implementations of Queue which need to be changed09:37
tristanCorrect09:38
qinustyDone needs to stay. But the return value for skipped does not.09:38
tristanqinusty, PushQueue/PullQueue use process() to report whatever they want through the return, they happen to report skipped through that09:38
tristanBut they dont need to do that anymore09:38
tristanAnd Queue.done() needs to lose its ability of marking something as skipped, as that is no longer the contract08:39
tristanqinusty, It might even make sense to make Skip() an internal exception to the _scheduler/ module (or "package" in pythonic terms)09:39
tristanThat would remove the ability to have it handled in timed activities; but that still makes sense right ?09:40
qinustySounds reasonable. I was approaching this from the other side. This makes much more sense.09:40
tristanOnly a task can be SKIPPED09:40
tristanAnd that fits with how we record skipped tasks in the summary and status area09:40
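The contract tristan and qinusty converge on here could be sketched like this; `SkipJob`, `run_job`, and the `pulled` flag are illustrative stand-ins, not BuildStream's real signatures:

```python
class SkipJob(Exception):
    """Internal to the scheduler package: raised by Queue.process()
    implementations when the task turns out to be a no-op (e.g. every
    cache server traversed was futile)."""


class PullQueue:
    def process(self, element, pulled):
        # Hypothetical: `pulled` stands in for "some remote had the artifact".
        if not pulled:
            raise SkipJob()
        return "pulled"


def run_job(queue, element, pulled):
    """Sketch of the try/except harness around a task: SkipJob is caught
    here, so neither Queue.process()'s return value nor Queue.done()
    needs a side channel for marking jobs skipped."""
    try:
        queue.process(element, pulled)
        return "SUCCESS"
    except SkipJob:
        return "SKIPPED"


print(run_job(PullQueue(), "base.bst", pulled=False))  # SKIPPED
print(run_job(PullQueue(), "base.bst", pulled=True))   # SUCCESS
```

Keeping `SkipJob` internal to the `_scheduler/` package enforces "only a task can be SKIPPED": plugin-delegated code and timed activities simply cannot raise it.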
qinustySo within a Push task, you have a Push timed activity. (that's just how it is)09:40
qinustythe activity would SUCCESS09:40
qinustyand the task would SKIPPED?09:40
tristanqinusty, That is not just how it is; afaics you added that09:41
qinustyI added that to pull09:41
qinustyone of them was timed_activity, the other was not09:41
qinustyI added for consistency09:41
tristanI see09:41
tristanRight; and https://gitlab.com/BuildStream/buildstream/merge_requests/765#note_99641097 politely asks for this to be removed in both09:41
tristansince it's redundant information in the log09:41
tristanqinusty, I think the ability to have any random `timed_activity()` show SKIPPED is orthogonal to the rest, but it seems to me desirable to avoid it09:43
tristan(I do think it should be a separate conversation, though)09:43
qinustyOkay, looking at that log. We have... Start of task, status, info, skipped task09:44
qinustywhere's the timed_activity there?09:44
tristanThere is none09:44
qinustyIt's been in the code for 2 months09:44
tristanqinusty, remember that that is not where we are going to be issuing the SKIPPED message09:44
qinustyI know. I'm trying to understand what is in the code, and what is in the log09:44
tristanhttps://gitlab.com/BuildStream/buildstream/blob/master/buildstream/_scheduler/jobs/job.py#L393 fires the SKIPPED message for the task (or that's the block)09:45
tristanqinusty, ok in that case I don't quite understand the question09:46
qinustyEither way, I'm just confusing things here. I'll work to what we've discussed and try and explain this point once I've thought about it more.09:47
tristanThe rationale of those 4 messages is basically: Start / Skipped are the timed task... STATUS is because this is actually only relevant in verbose mode (which is currently ON by default)... INFO is what the task wants to report as interesting information/result09:48
tristanSTATUS is like "Now I'm doing this...", INFO is like "This happened which I think you really care about knowing"09:49
qinustyMakes sense, but https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/element.py#L1789 is my point09:49
qinustyWhere is that.09:49
tristanqinusty, That is not.09:50
tristanbasically, that goes away entirely09:50
tristanit's absolutely useless09:50
qinustyBut it doesn't even print into the log.09:50
qinustyThat is master. not my branch09:50
tristanI know, it needs to go away is what I'm repeating09:50
qinustyOkay, are the messages suppressed within the log?09:51
tristanI don't understand09:51
qinustyThey don't print09:51
tristanAhhh09:51
tristanqinusty, That can be a side effect of silent_nested09:51
tristanDepending on where it's being called from09:51
*** iker has quit IRC09:51
tristanqinusty, I'm failing to understand the context of your question: You are looking at code that exists and is not doing what you expect it to be doing09:52
qinustyYes.09:52
tristanqinusty, that's why I'm having a different conversation :)09:52
tristanqinusty, That will be silent nested at work09:52
qinustyI'm trying to further understand the components of the larger problem :D09:52
qinustyWhich comes down to silent nested, as you mention09:52
qinustyOkay, I'll strip that out as part of my rework09:53
*** alatiera_ has joined #buildstream09:53
tristanThe "silent_nested" originally is there because: A code segment might want to time it's activity... But it might run in the context of something which is already timing it09:53
tristanIn a lot of these cases, the messaging becomes explosive, or at least it originally did09:53
qinustyOn a side note, completely separate argument. Is there a reason we have a task START messages show the path to the log file instead of a descriptive message?09:54
tristanqinusty, we had this conversation last week *also*09:54
qinustyWe did, I did not understand/agree. I feel like it is just confusing to a log reader09:55
qinustythey're obviously /necessary/, but replacing the START message?09:55
tristanqinusty, OK - So you have a task; a task is described already by the [ activity:element.bst ] portion of the log line09:55
tristanThere are not all that many kinds of task09:55
tristanPush/Pull/Build/Fetch/Track09:56
qinustyyup09:56
tristanThose are relevant in every line09:56
tristanNow it's also important to put the log line *somewhere*09:56
tristanAnd it's important to not use too much space, we dont want a newline for every START with a log file underneath, just because we wanted to print some redundant text09:57
tristanWhat would the task print there asides from "Starting Build" ?09:57
tristanOr "Building"09:57
tristanOr "Tracking", or "Pushing" ?09:57
*** iker has joined #buildstream09:57
qinustyMakes sense, perhaps I'm just reading the logs wrong. I primarily read the right-hand side. From what you're saying, to find the information I'm looking for I should be looking left09:58
tristanqinusty, in the case of the timed activity in question, for push pull; this redundant message was saying "START Pushing deadbeef"09:58
tristanBut we already have "Push" and we already have "deadbeef" and we already have "START"09:58
tristanin that line09:58
tristanWe *still* need a place to put the log; so since there is nothing really to add to that task line; we put it in the "custom message text" area on the right09:59
tristan(and nicely make it logdir relative, avoiding too much column width)10:00
ikerHi. I installed BuildStream yesterday and it was working perfectly; this morning I installed bst-external and since then the bst command is not recognized by bash. Has this happened to anyone else?10:00
tristanOh that's new to me :-S10:00
tristanjonathanmaw, any idea about that ? ^^^^^10:00
qinustyWhere is your buildstream installed? Is that part of your PATH?10:00
tristanqinusty, seems it was already working, though10:00
qinustyin a shell, maybe a new shell?10:01
jonathanmawiker: I haven't seen that before, either10:01
tristanBut yeah that is a good guess :)10:01
tristanqinusty, :)10:01
ikerI adjusted the PATH yesterday as described in the BuildStream wiki. Should that be done everyday?10:01
tristaniker, I think the BuildStream docs say to put it in .bashrc or .bash_profile no ?10:01
qinustyPATH should be adjusted in a permanent manner. e.g. ^10:01
tristanhttp://buildstream.gitlab.io/buildstream/install_source.html#adjust-path10:02
tristaniker, anyway just add it to the end of your .bashrc and you should be set :)10:02
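Concretely, the fix being suggested is along these lines (assuming a `pip install --user` layout, where executables land in `~/.local/bin`; adjust the path to match your install):

```shell
# Put this at the end of ~/.bashrc so every new shell picks it up.
# Exporting PATH only in one terminal (the pitfall hit here) lasts
# only for that session and is gone the next day.
export PATH="${HOME}/.local/bin:${PATH}"
```

After editing `~/.bashrc`, open a new terminal (or run `source ~/.bashrc`) and `bst` should be found again.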
ikeroh yes. I did it in the terminal just to try BuildStream and then I forgot to do it in .bashrc...10:03
tristaniker, interestingly; I have this in my .bashrc for more reasons than just BuildStream (for anything I have installed there)10:03
ikerthanks10:03
tristanwelcome :)10:03
gitlab-br-botbuildstream: merge request (mac_fixes->master: WIP: Resolve "os.sched_getaffinity() not supported on MacOSX Blocks #411") #726 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/72610:03
qinustyI assume this issue doesn't pop up with pip install tristan?10:04
tristanqinusty, It depends if you install --user or not10:07
tristanqinusty, I think many people install with `--user`, and are distrustful of pip going and writing to your system paths10:08
tristanIt'll never happen with distro packages, and you get man pages and bash completions for free10:08
tristanOr, you *should*, bash completions *might* have issues, but it should work out of the box10:08
qinustyNot with fish :(10:09
* qinusty wishes he was familiar enough with fish to write the completions for buildstream10:09
tristanright, but those would be fish completions !10:09
tristanhehe10:09
tristanqinusty, We should be able to reuse all the same code; someone who knows it well should be able to translate10:09
qinustyindeed10:09
tristanMaybe even a quick google will tell you how to integrate *any* bash completions into fish10:10
tristanmaybe a script exists for it somewhere10:10
*** lachlan has quit IRC10:10
tristanah... /me has to run :-S10:11
*** lachlan has joined #buildstream10:12
tristanI will fix the remaining issues with https://gitlab.com/BuildStream/buildstream/issues/623#note_99629594 this weekend, now that I'm reproducing10:12
tristanWill have to fix the server side of this too :-S10:13
tristantlater[m], juergbi; regarding performance of the pruning... I think that what we are doing is removing/pruning one artifact at a time10:13
tristanjuergbi, do you think it could be easily possible to have a loop which removes a bunch of refs, predicting how much space it can free; before entering prune ?10:14
tlater[m]The prediction part is what I struggled with :|10:14
tristanjuergbi, or rather, while removing refs; collecting some state which we can later feed to prune() so that half of its intense job is already done, and only the objects need removing ?10:14
tristantlater[m], Maybe as a stop gap; we can do something ugly like 5 artifacts at a time10:15
juergbithe prediction is indeed the tricky part10:15
juergbihowever, with suitable metadata about artifact size, we might be able to estimate it10:15
juergbiwe can't estimate the deduplication but it should still allow big reduction in work10:16
*** lachlan has quit IRC10:16
tlater[m]Well, we'll be overestimating the amount we remove10:16
juergbiyes10:16
tlater[m]I suppose that would at least give us better batching10:16
tlater[m]Yeah, that could work10:17
tristanjuergbi, maybe we could give CAS->prune() (or whatever it is)... A long list of artifacts, and a desired limit of space to make ?10:17
tristanLike, remove as many of these artifacts (in an ordered list), until you make enough space ? I dont know, maybe it amounts to the same10:17
tristanmeh10:17
*** alatiera_ has quit IRC10:17
tlater[m]Hm, I wonder how expensive reading artifact metadata is10:18
*** alatiera_ has joined #buildstream10:18
tlater[m]Since you need to extract first10:18
tlater[m]Perhaps we could have some form of manifest that also lists the approximate size?10:18
tristantlater[m], indeed it's not ideal to remove 5 at a time, but seeing as it took me 30 blocking minutes to clear up ~8G, I think 5 artifacts might be conservative10:18
juergbitlater[m]: strictly speaking we don't need extract directories for it10:18
juergbi(in the future i hope we can get completely rid of extract directories)10:19
tristantlater[m], and in any case; the size of an artifact has little bearing on how much can be removed from the store10:19
tlater[m]tristan: juergbi suggests overestimating the size of artifacts we remove, and using that to go all the way down. We can then recalculate the size and check if we've reached our target10:19
tristanjuergbi, I agree; however it's difficult to do while preserving the touchy logic of staging; especially with regards to how we fail (or not) with symlinks10:19
tristantlater[m], that doesnt sound bad either10:20
juergbiwith the future i meant when we use buildbox10:20
tristanAnyway, I'll start with fixing the errors, optimize after10:20
juergbi(and have optimized CAS virtual directories)10:20
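The batching idea juergbi and tlater[m] settle on, overestimate each artifact's size, remove a whole batch of refs in one pass, then recompute the real cache size and repeat if needed, could be sketched like this (`plan_batch` and the size table are hypothetical, not BuildStream code):

```python
def plan_batch(lru_refs, sizes, space_needed):
    """Pick refs to remove in a single prune pass.

    `sizes` maps each ref to its (over)estimated size; deduplication
    means the space actually freed may be less, so the caller
    recomputes the cache size afterwards and plans another batch if
    the target was not reached. This amortizes the expensive prune
    over many artifacts instead of doing one 30-minute pass per ref.
    """
    batch, estimated = [], 0
    for ref in lru_refs:              # least recently used first
        if estimated >= space_needed:
            break
        batch.append(ref)
        estimated += sizes[ref]
    return batch


sizes = {"a": 4, "b": 3, "c": 5, "d": 2}
print(plan_batch(["a", "b", "c", "d"], sizes, space_needed=8))  # ['a', 'b', 'c']
```

Because the estimate ignores deduplication it only ever over-removes relative to the target, which errs on the safe side for quota purposes at the cost of possibly evicting a little more than strictly necessary.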
*** alatiera_ has quit IRC10:22
*** tristan has quit IRC10:23
Kinnisonmablanch: I've kicked off a test run10:32
mablanchRunning the test suite over the 'jmac/remote_execution_client' branch works on my local machine but fails when executed by the CI (most of the time at a random point with a core dump...). Has anyone already seen that / has an idea why this would happen?10:33
mablanchKinnison, thanks!10:33
*** lachlan has joined #buildstream10:35
*** alatiera_ has joined #buildstream10:35
Kinnisonmablanch: Hmm, I think I have a test suite hang10:36
Kinnisonmablanch: there's one process and it's blocked in a futex10:37
mablanchKinnison, which test is hanging?10:37
Kinnisontests/artifactcache/push.py::test_push10:37
* Kinnison tries to work out if you can get Python to dump a stack trace wherver it's at, perhaps on a signal10:39
mablanchKinnison, Does it hang if you run: python setup.py test --addopts="tests/artifactcache"10:39
*** ikerperez has joined #buildstream10:41
*** lachlan has quit IRC10:42
*** iker has quit IRC10:42
*** sstriker has joined #buildstream10:43
sstrikerRandom question: what determines the parallelism of the jobs in the build queue?10:43
phildawsonKinnison, perhaps faulthandler is what you are after? (https://docs.python.org/3/library/faulthandler.html)10:43
Kinnisonphildawson: I found that if I install the python debug symbols, I can run 'py-bt' in gdb10:44
Kinnison:-D10:44
phildawson:)10:44
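phildawson's pointer is apt: the stdlib `faulthandler` module can be armed to dump every thread's Python stack on a signal, without killing the process, which is exactly what you want for a test run hung in a futex:

```python
import faulthandler
import signal

# Dump all threads' Python tracebacks to stderr when SIGUSR1 arrives;
# the process keeps running afterwards.
faulthandler.register(signal.SIGUSR1, all_threads=True)

# Then, from another terminal:
#   kill -USR1 <pid-of-hung-process>
```

(Kinnison's `py-bt` in gdb with python debug symbols gets the same information from outside the process, which also works when the interpreter itself is wedged.)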
Kinnisonseems to be dying in:10:44
Kinnison  File "/home/danielsilverstone/buildstream/buildstream/_platform/linux.py", line 68, in _check_user_ns_available10:44
Kinnison(or rather, deeper in the execute child stuff under there)10:45
gitlab-br-botbuildstream: merge request (mac_fixes->master: WIP: Resolve "os.sched_getaffinity() not supported on MacOSX Blocks #411") #726 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/72610:45
Kinnison  <built-in method fork_exec of module object at remote 0x7f972d0b7868>10:45
Kinnison  File "/usr/lib/python3.5/subprocess.py", line 1221, in _execute_child10:45
Kinnisonspecifically10:45
Kinnisonmablanch: that made it some of the way through and then managed to abort \o/10:47
* Kinnison tries again with a debugger10:47
Kinnisonooooh10:48
Kinnisonfound something10:48
Kinnisonthe abort happens deep in the grpc core10:48
Kinnisondoing some mutex frobbing10:48
Kinnisonmablanch: C backtrace https://pastebin.com/zScwbgnY10:49
Kinnisonmablanch: Python backtrace looks similar (in the execute_child stuff under check_user_ns_available)10:49
Kinnisonmablanch: could this be related to however you clean up the grpc stuff between tests?10:50
Kinnisonsstriker: There's a number of "workers" permitted, (various kinds thereof, but I imagine build workers are what you're interested in)10:50
sstrikerIgnore my earlier question; found my answer.  Resources in scheduler.10:50
* Kinnison stops10:50
Kinnison:-)10:50
gitlab-br-botbuildstream: issue #633 ("Remote Execution should maximize parallel builders") changed state ("opened") https://gitlab.com/BuildStream/buildstream/issues/63311:00
Kinnisonmablanch: sadly that forced gc.collect() didn't stop the aborts11:01
sstriker@Kinnison: that issue probably was no surprise after that question :)11:01
*** lachlan has joined #buildstream11:01
* Kinnison worries about what sstriker might have sent his way11:01
* sstriker mumbles... how do I @ mention in irc again?11:02
Kinnisonsstriker: You have problems in IRC, I have problems in slack :-D11:06
Kinnisonmablanch: We may have a serious blocker problem here11:06
Kinnisonmablanch: consider https://github.com/grpc/grpc/issues/13873#issuecomment-39032674111:06
KinnisonSpecifically: right now, gRPC only allows fork() iff the parent process has not yet initiated a gRPC connection (full documentation on the present state of things at https://github.com/grpc/grpc/blob/master/doc/fork_support.md).11:06
mablanchKinnison: Ouch...11:07
KinnisonThe content of https://github.com/grpc/grpc/blob/master/doc/fork_support.md does *not* fill me with joy11:08
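The fork constraint in that gRPC doc boils down to: never create a gRPC channel in a process that will later fork. A minimal sketch of the workaround discussed here (do all gRPC work in a child process, forked before any gRPC state exists in the parent); the gRPC channel itself is a hypothetical stand-in and is stubbed out:

```python
import multiprocessing

def do_cas_work(result_queue):
    # All gRPC state lives only in the child: the channel would be
    # created after the fork, so the parent never holds gRPC resources.
    # channel = grpc.insecure_channel("localhost:50051")  # hypothetical
    result_queue.put("done")

def run_grpc_in_subprocess():
    # Fork *before* any gRPC connection exists in the parent.
    queue = multiprocessing.Queue()
    child = multiprocessing.Process(target=do_cas_work, args=(queue,))
    child.start()
    result = queue.get()
    child.join()
    return result
```

This is the shape of the fix mablanch applies to the tests below: the parent only coordinates processes and never touches gRPC itself.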
sstrikerKinnison: sadness. Begs the question whether python is fit for purpose here (I'm assuming you are referring to #buildgrid context here, not buildstream side of things?)11:10
Kinnisonmablanch: I'm going to grab some lunch, then we should discuss the impact of this on the possibility of CAS related stuff11:10
Kinnisonsstriker: No, mablanch and I are trying to work through !62611:10
Kinnisonsstriker: BuildStream uses gRPC to talk between CASs11:11
sstrikerGot it.11:11
Kinnisonmablanch: I intend to have some lunch, then we should discuss the impact of this if you've not found a workaround by then.11:12
mablanchKinnison, Ok11:12
sstrikerI'll leave it to you.  I'm sure there will be a solution :)11:12
*** lachlan has quit IRC11:16
*** sstriker has quit IRC11:18
*** abderrahim has quit IRC11:18
*** abderrahim has joined #buildstream11:19
gitlab-br-botbuildstream: merge request (tiagogomes/issue-287->master: Add validation of configuration variables) #678 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/67811:21
gitlab-br-botbuildstream: merge request (mac_fixes->master: WIP: Resolve "os.sched_getaffinity() not supported on MacOSX Blocks #411") #726 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/72611:21
gitlab-br-botbuildstream: merge request (tiagogomes/issue-573->master: Reduce IO overhead caused by artifact cache size calculation) #671 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/67111:24
*** lachlan has joined #buildstream11:30
alatiera_is there a bst plugin for vim by any chance?11:30
tiagogomesNo11:40
tlater[m]alatiera_: We'd definitely appreciate someone writing one11:47
tlater[m]Syntax highlighting/completion on .bst files would be awesome.11:48
tlater[m](By syntax highlighting I mean coloring variables correctly, which yaml highlighting doesn't)11:49
gitlab-br-botbuildstream: merge request (mac_fixes->master: WIP: Resolve "os.sched_getaffinity() not supported on MacOSX Blocks #411") #726 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/72611:56
gitlab-br-botbuildstream: merge request (mac_fixes->master: WIP: Resolve "os.sched_getaffinity() not supported on MacOSX Blocks #411") #726 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/72612:01
mablanchKinnison: I'm updating the tests so that all gRPC calls are made in subprocesses. Will update the branch and ping you for a new test when ready.12:05
Kinnisonmablanch: excellent12:05
gitlab-br-botbuildstream: merge request (mac_fixes->master: WIP: Resolve "os.sched_getaffinity() not supported on MacOSX Blocks #411") #726 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/72612:07
gitlab-br-botbuildstream: merge request (mac_fixes->master: WIP: Resolve "os.sched_getaffinity() not supported on MacOSX Blocks #411") #726 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/72612:11
*** lachlan has quit IRC12:12
alatiera_tlater[m], heh, yea. I spent half an hour chasing a trailing whitespace in some meson-local:12:17
alatiera_apparently my vim finds something else to highlight given the .bst12:17
alatiera_setting the syntax to yaml manually works though12:17
tlater[m]Huh, that's interesting. Any clue what that something is?12:25
*** dtf has joined #buildstream12:26
qinustyreviewed toscalix :D12:37
*** finn has quit IRC12:47
*** pro[m] has quit IRC12:47
*** tlater[m] has quit IRC12:47
*** awacheux[m] has quit IRC12:47
*** abderrahim[m] has quit IRC12:47
*** ssssam[m] has quit IRC12:47
*** mattiasb has quit IRC12:47
*** segfault3[m] has quit IRC12:47
*** asingh_[m] has quit IRC12:47
*** kailueke[m] has quit IRC12:47
*** theawless[m] has quit IRC12:47
*** albfan[m] has quit IRC12:47
*** inigomartinez has quit IRC12:47
*** oknf[m] has quit IRC12:47
*** m_22[m] has quit IRC12:47
*** WSalmon has quit IRC12:47
*** lchlan has quit IRC12:47
*** Kinnison has quit IRC12:47
*** paulsher1ood has quit IRC12:47
*** laurence has quit IRC12:47
*** ironfoot has quit IRC12:47
*** flatmush has quit IRC12:47
*** tintou has quit IRC12:47
*** csoriano has quit IRC12:47
*** jjardon has quit IRC12:47
*** Nexus has quit IRC12:47
*** mablanch has quit IRC12:47
*** abderrahim has quit IRC12:47
*** ikerperez has quit IRC12:47
*** alatiera_ has quit IRC12:47
*** jonathanmaw has quit IRC12:47
*** leopi has quit IRC12:47
*** toscalix has quit IRC12:47
*** phildawson has quit IRC12:47
*** rdale has quit IRC12:47
*** tiagogomes has quit IRC12:47
*** SotK has quit IRC12:47
*** gitlab-br-bot has quit IRC12:47
*** adds68 has quit IRC12:47
*** tiagogomes_ has quit IRC12:47
*** coldtom has quit IRC12:47
*** jennis has quit IRC12:47
*** qinusty has quit IRC12:47
*** lantw44 has quit IRC12:47
*** Trevinho has quit IRC12:47
*** abderrahim has joined #buildstream12:48
*** ikerperez has joined #buildstream12:48
*** alatiera_ has joined #buildstream12:48
*** jonathanmaw has joined #buildstream12:48
*** leopi has joined #buildstream12:48
*** toscalix has joined #buildstream12:48
*** finn has joined #buildstream12:48
*** phildawson has joined #buildstream12:48
*** rdale has joined #buildstream12:48
*** tiagogomes has joined #buildstream12:48
*** oknf[m] has joined #buildstream12:48
*** albfan[m] has joined #buildstream12:48
*** pro[m] has joined #buildstream12:48
*** segfault3[m] has joined #buildstream12:48
*** kailueke[m] has joined #buildstream12:48
*** mattiasb has joined #buildstream12:48
*** inigomartinez has joined #buildstream12:48
*** asingh_[m] has joined #buildstream12:48
*** abderrahim[m] has joined #buildstream12:48
*** ssssam[m] has joined #buildstream12:48
*** tlater[m] has joined #buildstream12:48
*** theawless[m] has joined #buildstream12:48
*** awacheux[m] has joined #buildstream12:48
*** SotK has joined #buildstream12:48
*** m_22[m] has joined #buildstream12:48
*** WSalmon has joined #buildstream12:48
*** gitlab-br-bot has joined #buildstream12:48
*** adds68 has joined #buildstream12:48
*** tiagogomes_ has joined #buildstream12:48
*** coldtom has joined #buildstream12:48
*** jennis has joined #buildstream12:48
*** qinusty has joined #buildstream12:48
*** lchlan has joined #buildstream12:48
*** Kinnison has joined #buildstream12:48
*** paulsher1ood has joined #buildstream12:48
*** laurence has joined #buildstream12:48
*** lantw44 has joined #buildstream12:48
*** ironfoot has joined #buildstream12:48
*** flatmush has joined #buildstream12:48
*** Trevinho has joined #buildstream12:48
*** tintou has joined #buildstream12:48
*** csoriano has joined #buildstream12:48
*** jjardon has joined #buildstream12:48
*** Nexus has joined #buildstream12:48
*** mablanch has joined #buildstream12:48
*** irc.acc.umu.se sets mode: +oo ironfoot jjardon12:48
Kinnisonmablanch:12:50
mablanchKinnison, CI seems to prefer it now.12:50
Kinnison======================================================================= 35 passed in 26.05 seconds =======================================================================12:50
Kinnison[Inferior 1 (process 524) exited normally]12:50
Kinnisoneven under gdb with mutex debug turned to max12:50
mablanchKinnison, Ok12:51
* Kinnison is excited that this might mean we're closer to merge12:51
Kinnisonmablanch: Has anyone other than juergbi and myself looked over !626 yet?12:51
mablanchKinnison: Thanks for the help btw!12:51
mablanchKinnison, sstriker did.12:52
Kinnisonqinusty: If you're not too busy, more eyes on !626 in Buildstream would be cool12:52
Kinnisonmablanch: aah yes, excellent12:52
mablanchKinnison, qinusty: I'll push another update, the unit-test code may benefit from factorisation...12:53
gitlab-br-botbuildstream: merge request (mac_fixes->master: WIP: Resolve "os.sched_getaffinity() not supported on MacOSX Blocks #411") #726 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/72612:57
gitlab-br-botbuildstream: merge request (mac_fixes->master: WIP: Resolve "os.sched_getaffinity() not supported on MacOSX Blocks #411") #726 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/72613:02
gitlab-br-botbuildstream: merge request (mac_fixes->master: WIP: Resolve "os.sched_getaffinity() not supported on MacOSX Blocks #411") #726 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/72613:05
*** lachlan has joined #buildstream13:14
gitlab-br-botbuildstream: merge request (mac_fixes->master: WIP: Resolve "os.sched_getaffinity() not supported on MacOSX Blocks #411") #726 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/72613:19
gitlab-br-botbuildstream: merge request (jmac/remote_execution_client->master: WIP: Remote execution client) #626 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/62613:24
mablanchqinusty: Tests should be in a better shape now if you want to have a look.13:24
qinustyStill seeing issues mablanch?13:25
mablanchqinusty, Nope!13:26
qinustyNice! Ping me an MR if you need a review13:26
mablanchqinusty, Oh yes, sorry, here it is: https://gitlab.com/BuildStream/buildstream/merge_requests/626/13:27
gitlab-br-botbuildstream: merge request (tpollard/494->master: WIP: Don't pull artifact buildtrees by default) #786 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/78613:32
qinustyI do like the addition of unit tests, glad you got the issues fixed mablanch :D I can't review this entire MR right now but the unit tests look like a good start to me.13:34
qinustyWhat ended up being the cause to the hang?13:35
gitlab-br-botbuildstream: issue #634 ("bug in build after failed build with workspace's") changed state ("opened") https://gitlab.com/BuildStream/buildstream/issues/63413:36
mablanchqinusty: Mostly this: https://github.com/grpc/grpc/blob/master/doc/fork_support.md13:39
*** lachlan has quit IRC13:43
gitlab-br-botbuildstream: merge request (tiagogomes/issue-287->master: Add validation of configuration variables) #678 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/67813:53
NexusDoes this make sense to anyone?14:13
NexusBUG: Message handling out of sync, unable to retrieve failure message for element import element14:13
qinustymagic14:14
qinustyAre you tweaking things, or is this just happening?14:14
Nexusjust happening14:19
Nexuson a mac tbf14:19
*** lachlan has joined #buildstream14:40
*** ikerperez has quit IRC14:45
qinustyThere was something in the code which indicated this Nexus14:46
* qinusty searches14:46
Kinnisonmablanch: so there's one unresolved discussion left on !626 -- how close to asking for merge are we?14:47
qinustyhttps://gitlab.com/BuildStream/buildstream/blob/master/buildstream/_frontend/app.py#L536 Nexus, tlater[m] may know14:47
Nexusso it didn't fail, it didn't succeed, and it wasn't terminated...14:49
Nexusi'm glad that buildstream has about as much a clue about this as i do...14:49
mablanchKinnison, Never been so close. juergbi will you have time to have a look at that comment today?14:49
*** lachlan has quit IRC14:49
mablanchI think qinusty is having a look at the unit-tests part of the MR also.14:49
* qinusty took a few glances, but is currently working on an MR14:50
*** iker has joined #buildstream14:53
tlater[m]Nexus: A lot may cause that14:54
tlater[m]Essentially, your job finished, the process said it failed, but it didn't talk to the main process at all.14:55
tlater[m]It indicates some sort of a crash in a child process.14:55
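tlater[m]'s description can be sketched: the parent only learns of such a crash indirectly, by noticing the child exited non-zero without ever posting a result. This is an illustrative reconstruction, not BuildStream's actual scheduler code:

```python
import multiprocessing

def crashing_job(queue):
    # Simulate a child that dies before reporting anything back.
    raise SystemExit(1)

def run_job(target):
    queue = multiprocessing.Queue()
    child = multiprocessing.Process(target=target, args=(queue,))
    child.start()
    child.join()
    if queue.empty() and child.exitcode != 0:
        # Out of sync: the job failed but left no failure message,
        # which is roughly what the BUG message above is reporting.
        return "no failure message from child"
    return queue.get()
```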
Nexustlater[m]: well i'm having a fun little bug atm, where i can do `bst workspace list` anywhere (not in a bst project) and it will return `workspaces: []`14:55
tlater[m]That is entirely unrelated, the above can only happen during a build14:56
tlater[m]Nexus: Feel free to raise an issue about the workspace thing, I figure the expectation is to see "not in a buildstream project".14:57
Nexusit asks you to make a project14:57
Nexusthis only happens on MacOS14:57
Nexusadaik14:57
Nexusafaik*14:58
tlater[m]And you made the project?14:58
tlater[m]Anyway, that issue seems like it's fairly easy to fix. Do you have anything else on the crash when you're building?14:58
Nexushttps://pastebin.com/Xq5XeLhg14:59
Nexushastebin never seems to work for me anymore :(15:00
tiagogomes`bst workspace list` doesn't return an empty list to me from anywhere15:00
Nexustiagogomes: did you try running it on a mac? :)15:00
Nexustiagogomes: also it returns an empty list if you have no workspaces15:00
Nexusi raised that in a UX post to the mailing list a while ago i think15:01
tiagogomesI did not run in on a mac15:01
tpollardyou did Nexus15:01
Nexusgood, or i'd be confused :) i don't recommend running bst on a mac just yet15:01
Nexuscoolio15:01
tlater[m]Nexus: Yeah, this will be a pain to debug... I wonder what causes that stacktrace on os.mkdirs15:02
tlater[m]That's probably the first thing to investigate15:02
tlater[m]The bug message by itself is not very useful in determining what breaks, unless you're mucking with the scheduler itself.15:03
Nexusnope15:03
gitlab-br-botbuildstream: issue #636 ("Pytest 3.8.0 warnings from datafiles") changed state ("opened") https://gitlab.com/BuildStream/buildstream/issues/63615:05
tpollardI thought we'd pinned pytest? or was that just in ci15:08
qinustyIt's pinned to be >15:22
*** lachlan has joined #buildstream15:23
qinustyThe CI is just docker images, so it's whatever was installed when we last built the docker image for CI tpollard15:23
*** WSalmon has quit IRC15:27
*** lachlan has quit IRC15:28
tpollardI've found the commit I was thinking of, but it's not a hard pin anyway15:28
*** WSalmon has joined #buildstream15:29
toscalixjonathanmaw: good paragraph, thanks15:29
ikerWhen building a project in buildstream I get the following error: Build failure on element: gnu-toolchain/stage1-gcc.bst15:29
toscalixqinusty: thanks for your review. You have some more content15:29
toscalixto review15:30
ikerThe logs read: https://paste.codethink.co.uk/?493215:30
ikerthis error doesn't happen to other host machines15:30
ikeris it possible that I have skipped some steps?15:31
Kinnisoniker: that pastebin doesn't work for us.  Could you please use pastebin.com or similar?15:31
qinustyIt's best not to use codethink pastes in here iker :D15:31
ikero sorry15:31
ikerI didn't know that15:31
qinustypaste.gnome.org works15:31
WSalmonhttps://hastebin.com/15:31
qinustyI like hastebin, gnome just feels more welcome here :D15:32
qinustygnome paste never lets me paste as Text though D:<15:32
ikerhttps://paste.gnome.org/pbvrdlwif15:32
qinustyseems like an issue while interacting with some git internals, perhaps skullman may be of assistance15:35
skullmanI'm not hugely familiar with the submodule handling logic, that error suggests that it's expecting the commit to have submodules and is surprised when it doesn't.15:38
skullmanfrom what I can see from the code, that shouldn't be a fatal error15:41
skullmanwhat does the rest of the log say?15:41
*** finn has quit IRC15:42
*** finn has joined #buildstream15:42
*** rauchen has joined #buildstream15:44
ikerskullman, https://paste.gnome.org/pqs4eqhbx15:47
skullmanI can't see anything weird with that, has it been truncated?15:48
gitlab-br-botbuildstream: merge request (willsalmon/outOfSourecBuild->master: WIP: out of source builds) #776 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/77615:49
ikerIt has stopped by itself. Now it asks me to decide what to do15:49
ikerBuild failure on element: gnu-toolchain/stage1-gcc.bst15:49
skullmanok, but your log snippets don't show the error15:50
toscalixqinusty: I received a notific. mail with a couple of comments from you related to my last MR. But I cannot see them in the gitlab interface, even when I click the link in the notification15:59
qinustyhm. Weird15:59
ikerI am building it again, I will send you the logs when the error appears again16:00
toscalixqinusty: np, I will use the mail, but I really cannot find it16:01
tiagogomeshttps://www.buildstream.build/ works now16:02
qinustysent it again toscalix :D16:03
toscalixgrrrr, now I have a conflict16:04
qinustyI think it's important to avoid the idea of a 'new structure' as everything is backwards compatible. We simply add more customisation options to the configuration file.16:04
toscalixI agree with your comments16:05
toscalixit is simply that I cannot solve my conflict. Hold on a sec16:05
toscalixdo you mind approving this MR, which includes many changes, while I create a new one right away addressing your two comments?16:07
qinustyWhere has the conflict come from? Master hasn't updated any files you've been working on for a while16:07
toscalixqinusty: I cannot tell. There should be none16:07
qinustyWhat does git status say?16:07
toscalixI made a mistake somewhere16:07
toscalixMR with many commits.... I am not that good16:08
qinustyapproved16:09
toscalixthanks. I cannot solve the conflict. I will rebase and create a new MR16:11
toscalixit should be an easy one16:11
toscalixbut do not know why I cannot16:12
*** rauchen has quit IRC16:12
toscalixah, there is a test for checking links. Does it check undefined link and 404?16:18
toscalixundefined=no link at all16:18
gitlab-br-botbuildstream: merge request (Qinusty/skipped-rework->master: Add SkipError for indicating a skipped activity) #765 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/76516:25
qinustyIt just runs a standard crawler on the page to ensure all links lead to valid pages e.g. not 404, 50016:29
qinustyWhat does an undefined page look like?16:29
toscalixqinusty: I mean that you define a [link] but then forget to add a URL16:33
qinustyNot much we can do about that. That's a disadvantage to linking at the bottom of the page :D16:34
qinustyWe could write a script to do this16:34
qinustythe problem, toscalix, is that you can't tell the difference between a [text] that is missing a link and an intended literal '[text]'16:37
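The ambiguity qinusty describes is why such a check can only warn. A minimal heuristic sketch, assuming Markdown-style links (which may not match the website's exact source format):

```python
import re

def find_suspect_links(text):
    # Flag any [text] not immediately followed by (url): either a
    # link whose URL was forgotten, or an intentional literal
    # '[text]' -- the two cases cannot be told apart, so treat
    # every match as a warning rather than an error.
    return re.findall(r"\[([^\]]+)\](?!\()", text)

find_suspect_links("See [docs](https://example.org) and [forgotten].")
```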
*** tpollard has quit IRC16:39
ikerskullman, https://paste.gnome.org/p12yiwazz16:43
Kinnisonmablanch: Given juergbi hasn't complained, I think the remaining discussion on 626 is resolved.  Since noone else has had anything worrisome to say, I suggest we go ahead and merge if the pipeline is clean16:43
gitlab-br-botbuildstream: merge request (jmac/remote_execution_client->master: Remote execution client) #626 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/62616:43
qinustyiker, I refer you to flatmush who had this issue the other week. Maybe he can be of some assistance. It either comes from symlink issues or a recursive variable definition in your project.16:44
mablanchKinnison: The pipeline is ok. It needs at least 1 approval and a maintainer to be merged.16:45
qinustyIt only needs a merge mablanch, no approval is necessary currently in buildstream. Developers with granted access can merge.16:46
ikerqinusty, I talked with him before but he didn't know about it16:46
ikerthanks anyway. I will try to fix it on Monday16:46
ikerhave a good weekend16:47
toscalixqinusty: no problem16:47
mablanchqinusty, Oh ok, then it needs someone with that kind of access :)16:47
toscalixqinusty: https://gitlab.com/BuildStream/website/merge_requests/7116:47
*** iker has quit IRC16:47
gitlab-br-botbuildstream: merge request (jmac/remote_execution_client->master: Remote execution client) #626 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/62616:48
mablanchCheers qinusty and Kinnison!16:49
Kinnisoncongrats on that epic effort mablanch16:49
qinustyGet tristan or toscalix to bump your privileges for merging to BuildStream. Usually they're given after a patch or two. Things to note when merging: always remove the source branch; this is either a tickbox next to the merge button or an option on MR creation16:50
mablanchKinnison: Ah ah, thank you, juergbi, qinusty and you have been of great help!16:50
toscalixjuergbi: around?16:51
tlater[m]qinusty: ooi, do you not have permissions to give people permissions?16:52
qinustyOnly maintainers I believe16:52
qinustySince we changed to Developer with extra perms16:52
qinustyAs opposed to maintainers who could grant maintainer16:52
tlater[m]Fair enough. I wonder if you should get those permissions given you keep pointing new developers.16:53
*** lachlan has joined #buildstream16:54
toscalixI have permissions but I just use them for configs related to the project management side of the project.16:54
toscalixI do not mess around with areas where I have little or no expertise16:55
toscalixto merge you need the be Maintainer16:55
toscalixwe should have one person who is usually available on Friday afternoons since tristan is not16:56
jjardontoscalix: no, there is a list of people that can merge16:57
jjardonno need to be a maintainer16:57
toscalixjonathanmaw: is maintainer. Looking for that list17:00
*** lachlan has quit IRC17:02
toscalixjjardon: I cannot find that list. Do you know where it is?17:03
*** alatiera_ has quit IRC17:06
*** alatiera_ has joined #buildstream17:06
toscalixtlater[m]: is maintainer too17:07
toscalixI guess we have enough people to cover fri afternoon17:07
tlater[m]Yes, that's not really been a problem.17:08
toscalixfine then17:08
gitlab-br-botbuildstream: merge request (jonathan/pickle-yaml->master: WIP: Add a cache of parsed and provenanced yaml) #787 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/78717:09
*** alatiera_ has quit IRC17:10
*** alatiera_ has joined #buildstream17:10
*** toscalix has quit IRC17:13
*** alatiera_ has quit IRC17:15
*** alatiera_ has joined #buildstream17:15
jjardontoscalix: it is in the merge_request configuration in the buildstream project17:17
*** jonathanmaw has quit IRC17:27
*** lachlan has joined #buildstream17:49
*** lachlan has quit IRC17:53
*** lachlan has joined #buildstream18:16
*** lachlan has quit IRC18:21
*** phildawson has quit IRC18:27
*** lachlan has joined #buildstream18:43
*** lachlan has quit IRC18:57
*** leopi has quit IRC19:11
*** lachlan has joined #buildstream19:18
*** lachlan has quit IRC19:22
*** alatiera_ has quit IRC20:22
*** alatiera_ has joined #buildstream20:22
*** alatiera_ has quit IRC20:34
*** alatiera_ has joined #buildstream20:35
*** mohan43u has quit IRC20:39
*** mohan43u has joined #buildstream20:41
*** cs-shadow has quit IRC21:23
*** tristan has joined #buildstream21:34
*** alatiera_ has quit IRC22:35
*** mohan43u has quit IRC23:18
*** mohan43u has joined #buildstream23:19

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!