*** tristan_ has joined #buildstream | 00:09 | |
*** tristan has quit IRC | 00:09 | |
*** tristan_ has quit IRC | 02:06 | |
*** tristan_ has joined #buildstream | 03:12 | |
*** slaf has quit IRC | 05:12 | |
*** slaf has joined #buildstream | 05:14 | |
*** Prince781 has quit IRC | 05:32 | |
*** Prince781 has joined #buildstream | 05:42 | |
*** bochecha_ has joined #buildstream | 06:11 | |
gitlab-br-bot | buildstream: merge request (tristan/docs-integration-commands->master: docs: Adding section on integration commands) #517 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/517 | 06:34 |
*** bochecha_ has quit IRC | 06:56 | |
*** bochecha_ has joined #buildstream | 06:56 | |
*** Prince781 has quit IRC | 06:57 | |
*** bochecha_ has quit IRC | 07:01 | |
*** bochecha_ has joined #buildstream | 07:15 | |
gitlab-br-bot | buildstream: merge request (tristan/docs-integration-commands->master: docs: Adding section on integration commands) #517 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/517 | 07:37 |
*** coldtom has joined #buildstream | 07:54 | |
*** Phil has joined #buildstream | 07:57 | |
*** tristan_ has quit IRC | 08:36 | |
*** jonathanmaw has joined #buildstream | 08:38 | |
*** tiagogomes has joined #buildstream | 08:53 | |
*** bethw has joined #buildstream | 08:53 | |
*** dominic has joined #buildstream | 08:56 | |
*** tiagogomes has quit IRC | 08:57 | |
*** tiago has joined #buildstream | 08:59 | |
noisecell | good morning | 09:21 |
noisecell | when doing a "bst checkout" it seems that the artifact elements are copied to the selected directory; have we thought about using hard links instead of a copy, to avoid using a lot of the developer's disk space? e.g.: https://paste.baserock.org/zeguleduge | 09:23 |
tlater | noisecell: `bst checkout` has a `--hard-links` option (or suchlike), which you can use instead | 09:32 |
tlater | Keep in mind that this is linked directly to the artifact, so use it with care | 09:33 |
noisecell | tlater, oh, my bad. Thank you for pointing me to this :) | 09:35 |
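For illustration: the space savings tlater describes come from hard links sharing a single inode between the artifact cache and the checkout, which is also why the option is dangerous. A minimal sketch of the copy-versus-hardlink distinction (not BuildStream's actual implementation; `checkout_tree` is a hypothetical helper):

```python
import os
import shutil

def checkout_tree(src, dest, hardlinks=False):
    """Stage the tree at src into dest, copying or hard-linking files."""
    for root, dirs, files in os.walk(src):
        subdir = os.path.join(dest, os.path.relpath(root, src))
        os.makedirs(subdir, exist_ok=True)
        for name in files:
            src_file = os.path.join(root, name)
            dest_file = os.path.join(subdir, name)
            if hardlinks:
                # No file data is duplicated, but both paths now share
                # one inode: editing dest_file mutates the cached copy
                # too, which is why this must be used with care.
                os.link(src_file, dest_file)
            else:
                shutil.copy2(src_file, dest_file)
```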
*** ernestask has joined #buildstream | 09:56 | |
*** aday has joined #buildstream | 10:14 | |
laurence | ok, we have a mailing list for buildgrid | 10:16 |
laurence | jmac, finn, making us the 3 admins for now | 10:16 |
laurence | if that's ok | 10:16 |
finn | thanks | 10:26 |
laurence | have you received the admin mail? | 10:26 |
jmac | Nothing yet | 10:27 |
finn | not yet | 10:27 |
laurence | forwarded | 10:28 |
tlater | Can we get a link to this mailing list? Perhaps you should email the buildstream ML about this, considering we had an email stating that it would also be used for buildgrid a few weeks ago. | 10:31 |
laurence | tlater, that's the plan | 10:37 |
laurence | tlater, which mail do you refer to, ooi? | 10:37 |
laurence | i don't recall that specifically, but could be my memory | 10:37 |
tlater | Hm, I can't find it right now, perhaps my memory is off... | 10:39 |
*** cs_shadow has joined #buildstream | 10:46 | |
tlater | Is there a way to "block" an issue on gitlab? | 10:56 |
tlater | I have #438 which can probably not be worked on much before another issue is fixed | 10:57 |
*** aday has quit IRC | 10:57 | |
*** aday has joined #buildstream | 11:10 | |
laurence | tlater, I think a comment in the description is best, with a link. | 11:22 |
laurence | think that's all you can do | 11:22 |
tlater | Aww | 11:22 |
noisecell | is there any way to define "Strict build plan" inside of project.conf? or is this only a user option? | 11:28 |
noisecell | https://buildstream.gitlab.io/buildstream/using_config.html#user-config <-- for reference | 11:29 |
noisecell | Phil, tlater ^^ ? | 11:29 |
tlater | Sorry, not certain. | 11:32 |
*** laurence has left #buildstream | 11:33 | |
Phil | noisecell, I don't know, sorry. | 11:37 |
noisecell | tlater, Phil, no worries. | 11:40 |
noisecell | can I assume that --no-strict will work for any command? or will it only work for one? | 11:44 |
noisecell | it is not really clear in that "Note" | 11:45 |
noisecell | and I've checked that it doesn't work for `bst build` | 11:45 |
noisecell | $ bst build --no-strict systems/minimal-system.bst | 11:46 |
noisecell | Error: no such option: --no-strict | 11:46 |
tiago | On buildstream examples, it would be nice to display the output of `tree` under the project structure section to aid understanding the layout | 11:50 |
tlater | noisecell: You need `bst --no-strict build systems/minimal-system.bst` | 11:51 |
tlater | It's a flag on `bst` rather than `bst build` | 11:51 |
tlater | Which also means it works for every command :) | 11:51 |
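For context: BuildStream's `bst` command line is built with Click, and global flags like `--no-strict` are defined on the top-level command group rather than on each subcommand, which is why they must appear before the subcommand name. A minimal, self-contained sketch of that pattern (simplified, hypothetical option wiring):

```python
import click

@click.group()
@click.option("--no-strict", is_flag=True,
              help="Use a non-strict build plan")
@click.pass_context
def bst(ctx, no_strict):
    # Options on the group are parsed before the subcommand name, so
    # `prog --no-strict build foo.bst` works, while
    # `prog build --no-strict foo.bst` fails with "no such option".
    ctx.obj = {"strict": not no_strict}

@bst.command()
@click.argument("target")
@click.pass_context
def build(ctx, target):
    click.echo("building {} (strict={})".format(target, ctx.obj["strict"]))

if __name__ == "__main__":
    bst()
```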
gitlab-br-bot | buildstream: merge request (relative_workspaces->master: WIP: Patch for issue #191 support relative workspaces) #504 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/504 | 11:54 |
noisecell | tlater, I see, thanks | 12:40 |
*** bochecha_ has quit IRC | 12:46 | |
*** ernestask has quit IRC | 12:49 | |
gitlab-br-bot | buildstream: issue #444 ("Reopening a closed workspace is not intuative") changed state ("opened") https://gitlab.com/BuildStream/buildstream/issues/444 | 13:35 |
*** Prince781 has joined #buildstream | 14:43 | |
gitlab-br-bot | buildstream: merge request (coldtom/275->master: WIP: Indicate where artifacts are going to and coming from in the log) #518 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/518 | 14:52 |
*** laurence has joined #buildstream | 15:07 | |
*** mohan43u has quit IRC | 15:13 | |
*** mohan43u has joined #buildstream | 15:20 | |
*** mohan43u has quit IRC | 15:20 | |
*** tristan_ has joined #buildstream | 15:23 | |
*** mohan43u has joined #buildstream | 15:24 | |
*** bochecha_ has joined #buildstream | 15:24 | |
*** mohan43u has quit IRC | 15:25 | |
*** mohan43u has joined #buildstream | 15:27 | |
* tlater spots a tristan_ | 15:39 | |
tlater | I'd like to talk through the artifact expiry branch, if you have time | 15:39 |
*** slaf has quit IRC | 15:50 | |
*** slaf has joined #buildstream | 15:50 | |
*** dominic has quit IRC | 16:02 | |
*** Prince781 has quit IRC | 16:10 | |
*** Prince781 has joined #buildstream | 16:12 | |
*** dominic has joined #buildstream | 16:14 | |
*** dominic has quit IRC | 16:14 | |
*** Prince781 has quit IRC | 16:17 | |
*** tristan_ is now known as tristan | 16:25 | |
* tristan peels a spot off of his face | 16:25 | |
tristan | tlater, certainly, me too :) | 16:26 |
tristan | tlater, was in meeting before sorry | 16:26 |
tristan | still have some time ? | 16:26 |
tlater | Yep :) | 16:27 |
tristan | tlater, ok lets try to go over as much as possible, starting with the api for job behaviors thing | 16:27 |
* tristan prepares a browser tab | 16:27 | |
* tristan powers his laptop | 16:28 | |
tlater | Tsk, should always be on ;p | 16:28 |
tristan | attached cord | 16:28 |
tristan | sucks when it decides to just go off on its own | 16:28 |
tristan | !347 | 16:29 |
* tlater hopes to never see that number again come Wednesday | 16:29 | |
tristan | big page to load | 16:30 |
*** jcampbell is now known as j1mc_polari | 16:31 | |
tristan | tlater, alright I think I'm refreshed (as far as I've understood so far) | 16:33 |
tristan | tlater, do you understand what I mean in https://gitlab.com/BuildStream/buildstream/merge_requests/347#note_83809143 ? | 16:33 |
tlater | To save some loading; is that the part just under JOB_TYPES? | 16:34 |
tlater | Ah, yes | 16:34 |
tlater | Ok, so I understand that to a degree | 16:34 |
tristan | Alright, so basically, first forget QueueType and its members, it's a misnomer | 16:34 |
tlater | Yep, I'd even like to rename those now. | 16:35 |
tristan | or the names are misleading, they are intended to depict a specific /behavior/, not inform the core of a specific queue and /its behavior/ | 16:35 |
tlater | Now, Job types are still different from Queue types | 16:36 |
tristan | the way it currently works (worded this way because I expect your branch changes things significantly), is that the scheduler is layered with the concrete queue types (and the Scheduler itself) at the highest level | 16:36 |
tristan | I guess what I'm trying to explain, is the reasoning behind this is to keep the business logic on top, and keep the lower/core levels mechanic | 16:37 |
tristan | dont let knowledge of the business logic creep in | 16:38 |
tristan | to the abstract queue.py class or other components which are focused on providing the features for the business logic layer | 16:38 |
* tlater struggles to define something as "business logic" | 16:39 | |
laurence | Phil, thanks a lot for the status update on the ticket !! - https://gitlab.com/BuildStream/buildstream/issues/437#note_84013038 | 16:39 |
tristan | tlater, I see what you mean... this is all very mid-layer stuff | 16:39 |
tristan | tlater, in literal terms, I'd do backwards what you have done | 16:40 |
*** Phil has quit IRC | 16:41 | |
tlater | I.e., instead of having the scheduler figure out when to start running jobs, you'd rather have the jobs figure out when they are allowed to run? | 16:41 |
tristan | I.e. right now, iiuc, "The code which declares the job gives the scheduling logic its $type, and the scheduling logic assigns a behavior to that" | 16:41 |
tristan | Instead I would have "The code which declares the job informs the scheduling logic of the desired behavior" | 16:41 |
tristan | directly | 16:42 |
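A minimal sketch of the inversion tristan is proposing, with hypothetical names: the code that creates a job declares the scheduling behavior it needs, instead of handing the scheduler a type and letting the scheduler look the behavior up:

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    action: str
    # The code creating the job declares the behavior it needs; the
    # scheduler never has to know what kind of queue it came from.
    shared_resources: set = field(default_factory=set)
    exclusive_resources: set = field(default_factory=set)

# e.g. a BuildQueue would declare what its jobs need itself, keeping
# that business logic in the queue's own file:
build_job = Job("build", shared_resources={"cache"})
clean_job = Job("clean", exclusive_resources={"cache"})
```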
tlater | Alright, I understand that on a high level | 16:42 |
tlater | My problem is trying to see how this api would work | 16:42 |
tristan | This basically keeps business logic parts in the right places... in this case the example is what features a Queue concrete class implements... the decisions a Queue class makes should all go in its file | 16:43 |
tlater | Hm, I suppose each job could carry information that tells the scheduler what kinds of jobs it's allowed to run with? | 16:44 |
tristan | So that's the part I need a refresher on | 16:44 |
tristan | How is this part defined, let's see | 16:44 |
tlater | Currently, the scheduler picks off jobs from the queues whenever they become ready, but only starts running them when it has asserted the various conditions are met | 16:46 |
tlater | The conditions here aren't the usual "has my dependency been built", but "will I completely screw over another currently running job if I run" | 16:47 |
tlater | Which in my mind certainly is scheduler logic, and shouldn't go down to the queues/jobs | 16:47 |
gitlab-br-bot | buildstream: merge request (jmac/cas_virtual_directory->jmac/googlecas_and_virtual_directories_2: WIP: CAS-backed virtual directory implementation) #481 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/481 | 16:52 |
tristan | tlater, sorry | 16:53 |
tristan | I was going to say something here and said it in comments also... | 16:53 |
tlater | Ah | 16:53 |
tristan | tlater, looks like you have some cleanup of what is public/private to do also, and some missing API docs comments | 16:54 |
tlater | I agree, I was half expecting a comment on that anyway | 16:54 |
tristan | Right... so... | 16:55 |
tristan | What is the ideal logic | 16:55 |
tristan | I also checked out a local branch | 16:55 |
*** coldtom has quit IRC | 16:55 | |
tristan | tlater, how does the scheduler pick off jobs from the queues ? | 16:55 |
tristan | schedule_queue_jobs() | 16:56 |
tristan | ok, so scheduler is the one asking the jobs individually | 16:56 |
tristan | tlater, how come there is still a Queue.active_jobs member ? | 16:56 |
tristan | is it required elsewhere ? | 16:57 |
tlater | That's still there for frontend purposes | 16:57 |
tlater | The frontend wants to know which jobs belong to which queues, unfortunately. | 16:58 |
tlater | Hm, now that I think about that again, perhaps the frontend could ask the jobs what queue they belong to instead | 16:58 |
tlater | It still wants to know total number of jobs running in each queue, though, so that won't work | 16:59 |
tlater | Gah, well, in either case, the frontend is a bit entangled in this logic still | 16:59 |
tlater | And it's quite hard to break out completely. I'd rather leave that for a separate MR. | 17:00 |
tristan | errr, in any case what is here is futzed | 17:00 |
tristan | tlater, or, where do you keep the Queue.active_jobs lists up to date ? | 17:01 |
tristan | tlater, ok I'll comment on the issue there... | 17:02 |
tlater | Yep, I can have a look at that later - iirc the frontend at least updates some of those states | 17:02 |
tlater | (Which again is annoying) | 17:03 |
albfan[m] | tristan: heading to my first 70Gb gnome build | 17:04 |
* albfan[m] set up a build with a flatpak gnome runtime to test cached build trees | 17:06 | |
tristan | tlater, grep tells me nothing updates that | 17:08 |
tlater | Hm, you're right, it's only failed elements | 17:10 |
tristan | noted in the MR... | 17:10 |
tristan | tlater, let's just not maintain it in two places | 17:10 |
tristan | tlater, I would also very much like to exclude Queues from what the frontend consumes as API if possible, but I think that's a song and dance and out of scope | 17:11 |
tristan | maybe if we could start with moving the active/failed/skipped jobs out of there and use an API on scheduler to report those as needed, it can be a good step | 17:12 |
tlater | Yep, that seems alright | 17:13 |
tristan | so this branch changes things so that storage of the running tasks is not delegated to a distributed handful of lists in Queues | 17:13 |
tristan | instead the active jobs are handled by the scheduler | 17:14 |
tristan | But, we do want per queue statistics (thinking out loud here) | 17:14 |
tristan | including a window on the active jobs | 17:14 |
tristan | We're adding more code about queues to scheduler, that's not ideal either | 17:14 |
tristan | Maaaaaybe it's better to settle for Queues being part of the frontend facing API | 17:15 |
tristan | tlater, I think that's less code and work all around | 17:15 |
tristan | however, we *have* to avoid the maintaining of two cached list values | 17:15 |
tristan | tlater, before this, we didnt have waiting jobs, why do we have it now ? | 17:16 |
tlater | tristan: Because of the condition that some jobs aren't allowed to start running until a cleanup job has finished, but a cleanup job may not be allowed to start because other jobs are already running | 17:17 |
tlater | So we end up with a few jobs that, while technically capable of running, can't run because they'd end up running in parallel with others | 17:17 |
tristan | Right | 17:18 |
*** jonathanmaw has quit IRC | 17:18 | |
tristan | tlater, I think it's sensitive to carry these ready jobs around | 17:19 |
tristan | tlater, and I think carrying them around, is because we don't have a Queue.peek_ready() method | 17:19 |
*** bethw has quit IRC | 17:20 | |
tlater | I think that would do it, yes. | 17:20 |
tlater | In which case we could avoid the waiting job, and perhaps even the active job queues | 17:22 |
tristan | We need the active jobs of course | 17:23 |
tristan | so we can suspend them and terminate them | 17:23 |
tristan | and resume them | 17:23 |
tristan | ok so far not bad... | 17:25 |
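A rough sketch of the `Queue.peek_ready()` idea under discussion (hypothetical implementation): the scheduler can inspect the next ready job without dequeuing it, so jobs that can't run yet stay in their queue rather than being carried around in a separate waiting list:

```python
from collections import deque

class Queue:
    def __init__(self):
        self._ready = deque()

    def enqueue(self, job):
        self._ready.append(job)

    def peek_ready(self):
        # Inspect the next ready job without committing to run it; if
        # the scheduler can't start it yet (say, a cleanup job holds
        # the cache exclusively), the job simply stays queued here.
        return self._ready[0] if self._ready else None

    def pop_ready(self):
        # Called only once the scheduler has decided the job may run.
        return self._ready.popleft()
```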
*** jonathanmaw has joined #buildstream | 17:34 | |
tristan | tlater, ok so I think most of what we looked at is pretty straight forward, mostly the question is; what is the right API for the main issue | 17:35 |
tristan | job behaviors | 17:35 |
tlater | My new idea is to essentially piggyback the current dict on job objects | 17:36 |
tristan | dict-on-job-objects ? | 17:37 |
tlater | Not quite. Job objects have "exclusive" and "priority" attributes | 17:38 |
tlater | Which encodes the same information | 17:38 |
tristan | So the scheduler pops off these jobs from queues... to be honest now that we look at it... having a QueuePool implement a JobSource, and having another JobSource for cleanup jobs, would be really nice | 17:38 |
*** jonathanmaw has quit IRC | 17:38 | |
tristan | anyway... woosh, that's not a necessary refactor haha :) | 16:38 |
tristan | Ah right | 17:38 |
tristan | tlater, ok so, by feeding the scheduler a job with the exclusive and priority attributes, the scheduler knows what to do ? | 17:39 |
tristan | tlater, I think you need more than that | 17:39 |
tristan | if I understand the values of these attributes are not simple | 17:39 |
tristan | they require knowledge of the color of other jobs being processed | 17:40 |
tlater | Yes, but the scheduler knows about those | 17:40 |
tlater | The question is; should it? | 17:41 |
tristan | No, I'm looking for the code block again but almost certainly no it should not | 17:41 |
tlater | It's in sched() | 17:41 |
tristan | JOB_RULES, right | 17:42 |
tlater | The any() statements | 17:42 |
tristan | right right, that's certainly gotta go | 17:42 |
tristan | the thing is, you are saying that a job is "exclusive of other specific job types" | 17:43 |
tristan | that's what we gotta rethink | 17:43 |
tristan | tlater, I have an idea | 17:45 |
tristan | tlater, basically, what we want to say to the scheduler, is one of three things: "I dont care about resource FOO", "I need to access resource FOO" and "I need exclusive access to resource FOO" | 17:46 |
tristan | tlater, where resource FOO is the "artifact cache" here | 17:46 |
tristan | tlater, so maybe we want domains and a read-write lock kind of API for those domains | 17:46 |
tristan | does that work for these cases ? | 17:47 |
tristan | it works for "exclusive" | 17:48 |
tristan | that is the lock, it could be extended to cover more things I suppose | 17:48 |
tristan | priority could be: "I want nothing of lower priority to be put in a run state before me" | 17:49 |
tlater | Hm | 17:50 |
tlater | It does work for exclusive, yes, but it will have three states | 17:50 |
tristan | not sure about priority | 17:50 |
tlater | Priority is harder, yes | 17:50 |
tlater | It still requires knowledge of other jobs | 17:50 |
tristan | exclusive has three possible values yes | 17:50 |
tristan | Hrrrmmm | 17:52 |
tristan | tlater, do we need priority at all ? | 17:52 |
tlater | I left a comment about that on JobType | 17:52 |
tlater | Or JOBTYPE? | 17:52 |
tristan | tlater, first better question, why does CLEAN not have priority over PUSH ? | 17:53 |
tlater | Because PUSH does not modify the artifact cache | 17:53 |
tristan | if CLEAN needs to wait for all PUSH jobs to complete, I would think it should have priority | 17:53 |
tlater | It simply needs a read lock | 17:53 |
tlater | It does not need priority, because the cache size won't be changed by push jobs | 17:54 |
tristan | right but it looks like you risk starvation | 17:54 |
tlater | How? | 17:54 |
tristan | if for instance I'm building something with large artifacts and I have to push a lot over a sluggish network | 17:54 |
tlater | Yes, agreed | 17:55 |
tristan | my build jobs wont run because I have a pending clean | 17:55 |
tlater | But my pushes are taking forever | 17:55 |
tristan | and clean wont run because of pushes, push bottlenecks me | 17:55 |
tlater | Hmmm | 17:55 |
tlater | Do we even need to worry about push? | 17:55 |
tristan | so I'd rather have everything built with a big full pending queue in push if that's the case | 17:55 |
tlater | Technically we can never push an artifact that will be deleted | 17:55 |
tristan | tlater, there was another point to this... | 17:56 |
tristan | which is: lets just remove priority from the API | 17:56 |
tristan | tlater, the one who creates the Job says what resources it needs to access | 17:57 |
tristan | tlater, maybe for now it's a tristate, if we have more resources though it should be a hash table of booleans I guess | 17:57 |
tristan | i.e. the read-write locks | 17:57 |
tristan | and the scheduler takes care of guarding against writer starvation by making it priority | 17:58 |
tlater | tristan: so we consider PUSH exclusive as well? | 17:58 |
tristan | why should PUSH be exclusive ? | 17:59 |
tristan | that seems strange | 17:59 |
tristan | tlater, we have to make CLEAN exclusive of PUSH | 17:59 |
tristan | no choice | 17:59 |
tlater | Yes, that is what I meant :) | 17:59 |
tlater | Do we consider other resources besides the cache yet? | 18:01 |
albfan[m] | Is there any open issue about download speed, I find it downloads so slow, at least flatpak runtimes (comparing with flatpak install) I tried high numbers on fetchers but no changes | 18:01 |
tristan | so, it's a scheduler with read-write locking of resources, which prioritizes the exclusive locks by not allowing any further tasks to run which require the same resource, until it has run | 18:01 |
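A sketch of that starvation guard under the same assumptions (hypothetical names, not the real scheduler): once a job is waiting for exclusive access, no new shared holders are admitted, so the exclusive job only ever waits for jobs that were already running:

```python
class ResourceLock:
    """Read-write lock with writer priority, for a single resource."""

    def __init__(self):
        self.readers = 0
        self.writer = False
        self.writer_waiting = False

    def try_acquire_shared(self):
        # Refuse new readers while a writer is waiting: this is what
        # keeps a pending CLEAN from being starved by a stream of PUSHes.
        if self.writer or self.writer_waiting:
            return False
        self.readers += 1
        return True

    def try_acquire_exclusive(self):
        if self.readers or self.writer:
            self.writer_waiting = True   # block new readers from now on
            return False
        self.writer_waiting = False
        self.writer = True
        return True

    def release_shared(self):
        self.readers -= 1

    def release_exclusive(self):
        self.writer = False
```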
tristan | albfan[m], if you flatpak install a runtime that you download with ostree clone or an ostree element in bst build... flatpak is faster ?? | 18:02 |
tristan | albfan[m], that sounds quite odd | 18:02 |
tristan | albfan[m], are you using https maybe ? | 18:03 |
tristan | when you have the gpg key for an ostree repo, it should usually be quicker to use http | 18:03 |
tristan | tlater, I dont know what other resources though :) | 18:04 |
tristan | heh | 18:04 |
tristan | tlater, for now I guess it's a tristate | 18:04 |
tristan | tlater, wait a sec... | 18:05 |
tristan | tlater, there is a BETTER idea, but it hasn't completely taken form | 18:05 |
tristan | tlater, the better idea is that A.) There are resources... B.) Some jobs need them, sometimes exclusively... C.) Some resources are limited | 18:06 |
albfan[m] | tristan: like 8Mb/s with flatpak and 800Kb/s with bst | 18:06 |
tristan | tlater, then we get to replace job tokens and QueueType with a limited resource type | 18:07 |
tristan | a limited resource means of course that only a limited number of jobs can use it at the same time | 18:07 |
tlater | Oh | 18:07 |
tlater | Yes, that's cute | 18:07 |
tlater | And we only have one cache, so that can only be exclusively used by one job at the same time | 18:08 |
tristan | well, I think each resource is a "name" anyway :) | 18:09 |
tristan | even if we had two caches, I dont know if a job would be satisfied exclusively accessing a given one | 18:09 |
tristan | (at random) | 18:10 |
tristan | CPUs could be more elaborate and different and fancy of course, but ... for now it's just processing | 18:10 |
tlater | Yes, I don't think multiple caches make much logical sense in our case | 18:10 |
albfan[m] | bad thing too is that stopping the current download I lose all fetched bits, there's no recovering what I have downloaded if I ctrl+c | 18:10 |
tlater | I'm just thinking about how the implementation would handle it :) | 18:10 |
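A rough sketch of the "limited resources" idea as it stands at this point in the discussion (names hypothetical; the eventual implementation may differ): every resource is a name, some have a holder limit that replaces the old job tokens, and exclusive access excludes all other holders:

```python
class Resources:
    def __init__(self, num_builders, num_fetchers):
        # Some resources are limited: only so many jobs may hold them
        # at once. A limit of None means unlimited.
        self._limits = {"process": num_builders,
                        "fetch": num_fetchers,
                        "cache": None}
        self._held = {name: 0 for name in self._limits}
        self._exclusive = set()  # resources currently held exclusively

    def reserve(self, shared, exclusive):
        for name in shared | exclusive:
            if name in self._exclusive:
                return False  # someone holds it exclusively
            limit = self._limits[name]
            if limit is not None and self._held[name] >= limit:
                return False  # the limited resource is exhausted
        if any(self._held[name] for name in exclusive):
            return False  # exclusive access requires no other holders
        for name in shared | exclusive:
            self._held[name] += 1
        self._exclusive |= exclusive
        return True

    def release(self, shared, exclusive):
        for name in shared | exclusive:
            self._held[name] -= 1
        self._exclusive -= exclusive
```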
albfan[m] | but yes gnomesdk: https://sdk.gnome.org/ is in my project.conf | 18:11 |
tristan | albfan[m], hrmmmm, care to file an issue about that ? | 18:11 |
tristan | albfan[m], sounds completely overlooked | 18:11 |
tristan | attempting to resume downloads would be awesome but I wouldn't hold my breath | 18:12 |
tristan | at least the ability to let the user try to deal with it, in the rare case that the user wants the tempdir / partial download, could be easy | 18:13 |
albfan[m] | tristan: I found the place for that buildstream/_ostree.py fetch I will try to compare with flatpak install | 18:13 |
tristan | tlater, so... sounds like we have a good plan I think ? | 18:15 |
tlater | Yep, this sounds good | 18:16 |
tlater | Not too hard to implement either, I think, so that's a plus | 18:16 |
tristan | tlater, before wrapping up | 18:17 |
tristan | tlater, where does the cleanup job get created ? | 18:17 |
tlater | BuildQueue, iirc | 18:17 |
tlater | After it finishes a build job | 18:18 |
tristan | looks like you stopped using that stateful _start_cleanup var thing, but left it in the Scheduler's __init__ | 18:18 |
tlater | Eep | 18:19 |
* tlater assumes this happened while rebasing | 18:19 | |
tlater | That was a very messy rebase |: | 18:19 |
tristan | tlater, looks like you can easily schedule more than one cache size calculator job | 18:20 |
tlater | Yes... I saw little reason not to | 18:20 |
tristan | looks like it would be nice to have a home for this logic to live | 18:21 |
tlater | Except for taking longer | 18:21 |
tristan | rather than a mutant limb of BuildQueue | 18:21 |
tristan | i.e. ensuring there is only ever one, knowing that it's queued and you dont need two of them | 18:21 |
tristan | tlater, looks like you also fail to do it in the PullQueue | 18:22 |
tlater | Ah right | 18:23 |
tristan | after having added data to the artifact cache and increased its size | 18:23 |
tlater | Hm | 18:23 |
tristan | maybe best to have some helper function directly on Scheduler in this case | 18:23 |
tlater | I used to have that, I suppose that's nicer after all | 18:23 |
tristan | like to manage if there is an ongoing job internally and not redundantly queue it | 18:24 |
tristan | it's a concession, would be nice to not have the scheduler know about the cache specifically | 18:24 |
tristan | it's still a mutant limb, but its now shared by multiple other limbs | 18:24 |
tlater | Yeah, eventually it might be nicer to have something else launch these jobs | 18:25 |
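A minimal sketch of the helper being settled on here (hypothetical names): a single entry point on the Scheduler that BuildQueue and PullQueue call after growing the cache, which internally guarantees at most one cache-size job is queued or running at a time:

```python
class Scheduler:
    def __init__(self):
        self._cache_size_scheduled = False

    def check_cache_size(self):
        # Called by BuildQueue and PullQueue after adding artifacts;
        # deduplicated here so callers never schedule two such jobs.
        if self._cache_size_scheduled:
            return
        self._cache_size_scheduled = True
        self._spawn_cache_size_job()

    def _on_cache_size_done(self):
        # Invoked from the job's completion callback.
        self._cache_size_scheduled = False

    def _spawn_cache_size_job(self):
        # Placeholder: hand a cache-size job to the normal job
        # machinery, wiring _on_cache_size_done as its completion
        # callback.
        pass
```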
albfan[m] | tristan: let me use first flathub instead of sdk.gnome.org | 18:28 |
tristan | tlater, I feel like event source objects is the way to go | 18:29 |
tristan | tlater, our own little loop, every time the scheduler wakes up it checks what event sources are ready to be dispatched, and we have one which does the element queues and resulting side effect events | 18:30 |
tristan | but meh | 18:31 |
tristan | it's possibly over engineered :) | 18:31 |
tristan | (not "our own little loop" really, still asyncio underneath of course) | 18:31 |
tlater | Hm, well, ok, we have a plan for now | 18:31 |
* tlater hopes he'll manage before juergbi returns and screws over his branch *again* ;) | 18:32 | |
tristan | I'm rooting for you | 18:35 |
tristan | :) | 18:35 |
*** tristan has quit IRC | 18:39 | |
albfan[m] | tried with flathub, definitely slower, I will do some research and file a bug | 18:41 |
albfan[m] | flatpak (8Mb/s), ostree (6Mb/s) and ostree from Python (BuildStream) 1Mb/s, asking on #flatpak for more info | 18:53 |
albfan[m] | have to make a try without any download in ~/.local/share/flatpak/ | 19:00 |
gitlab-br-bot | buildstream: merge request (caching_build_trees->master: Caching buildtrees) #474 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/474 | 19:01 |
albfan[m] | anyway, I have my gtksourceview built against the gnome sdk with the caching_build_trees branch (just rebased it on top of master), let's try to run some tests! | 19:02 |
*** laurence has left #buildstream | 19:27 | |
*** laurence has joined #buildstream | 19:27 | |
gitlab-br-bot | buildstream: merge request (caching_build_trees->master: Caching buildtrees) #474 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/474 | 20:58 |
albfan[m] | oh, hehe this ^^ is my rebase (I was confused) | 21:09 |
albfan[m] | tristan: are cached build trees supposed to be usable in a bst shell? | 21:11 |
*** Prince781 has joined #buildstream | 21:13 | |
*** aday has quit IRC | 21:21 | |
*** tristan has joined #buildstream | 21:22 | |
*** j1mc_polari has quit IRC | 21:34 | |
*** jcampbell has joined #buildstream | 21:36 | |
*** jcampbell has quit IRC | 21:47 | |
*** jcampbell has joined #buildstream | 21:49 | |
gitlab-br-bot | buildstream: issue #445 ("Slow downloads on BuildStream") changed state ("opened") https://gitlab.com/BuildStream/buildstream/issues/445 | 22:04 |
*** aday has joined #buildstream | 22:31 | |
*** bochecha_ has quit IRC | 22:40 | |
*** aday has quit IRC | 22:48 |