juergbi | benschubert: hm, the build is failing with up-to-date casd? i.e., getting OOM killed at capture? that would be a different issue than what WSalmon reported | 00:12 |
*** benschubert has quit IRC | 00:13 | |
*** benschubert has joined #buildstream | 09:19 | |
benschubert | juergbi: let me update everything and come back to you :) | 09:19 |
benschubert | I need to see if it's also not simply llvm that's using too much | 09:19 |
juergbi | right, the llvm build can use a lot of memory if you use high parallelization | 09:20 |
benschubert | yeah I'll try with less first | 09:20 |
benschubert | seems like the default of 8 might be too much | 09:20 |
juergbi | benschubert: did you see an OOM kill, the terminated process in the kernel log might already provide a hint in that case | 09:21 |
juergbi | if the compiler was killed it is definitely not related to the capture phase | 09:21 |
benschubert | juergbi: the kernel logs were only showing messages about fuse cleaning up stuff, I did not have more history than that? | 09:22 |
benschubert | more accurately: | 09:22 |
benschubert | [ 2577.966320] fuse: trying to steal weird page | 09:22 |
benschubert | [ 2577.966323] page=000000004335e751 index=47445 flags=80000000000000ad, count=1, mapcount=0, mapping= (null) | 09:22 |
juergbi | ah, right, that message is quite spammy, hopefully fixed soon in master | 09:22 |
juergbi | you don't have journald? | 09:22 |
benschubert | nope, running in docker | 09:23 |
benschubert | on a windows machine :) | 09:23 |
juergbi | ah, I see | 09:23 |
*** doras has joined #buildstream | 10:36 | |
doras | I really need a way to force CAS cleanup. It's getting far too large and once I set a quota it just says "Error instantiating artifact cache: Your system does not have enough available space to support the cache quota specified." | 10:37 |
doras | ... instead of trying to clean up right there and then. | 10:38 |
benschubert | doras: BuildStream master or 1.4? | 10:39 |
doras | 1.4 | 10:39 |
benschubert | what's the size of your cache? If it's very big, it might be worth wiping it completely and rebuilding your project; bst 1.4 cache cleanup is notoriously slow | 10:40 |
doras | Deleting it means 3 hours of pulling | 10:42 |
doras | And doing it every few weeks is really getting old. | 10:43 |
benschubert | doras: there are a lot of improvements on the master branch for such things. We can't backport them, however | 10:43 |
benschubert | I haven't touched the 1.4 cache for a very long time. can you du -hs ~/.cache/buildstream/* please? and how much disk space do you have available? | 10:44 |
benschubert | Note that if your cache is around 2TB currently, cleaning it will take more than 8 hours, so 3h of download might be better in that case | 10:45 |
benschubert | https://gitlab.com/BuildStream/bst-plugins-experimental/-/merge_requests/123 if anyone has time for reviews :) | 10:48 |
benschubert | juergbi: can confirm it was not a buildbox problem :) | 10:59 |
*** tomaz has joined #buildstream | 11:04 | |
tomaz | I'm trying to port a specific application from flatpak-builder to buildstream, and while trying to run it I'm getting this error: | 12:06 |
tomaz | https://pastebin.com/yjD9xWNx | 12:06 |
tomaz | apparently it can't access the display :0 - can anyone shed some light? | 12:07 |
juergbi | benschubert: good to know | 12:10 |
juergbi | tomaz: you might be missing configuration for bst shell to allow access to display etc. | 12:11 |
juergbi | this could be useful as starting point: https://gitlab.gnome.org/GNOME/gnome-build-meta/-/blob/master/project.conf#L157 | 12:11 |
tomaz | juergbi: on the pastebin there are the environment variables to access the project. /me is looking at the project conf | 12:14 |
tomaz | juergbi: yeah, I'm setting everything that the configuration you sent me sets. | 12:14 |
tomaz | (on the file app.yml) | 12:15 |
juergbi | tomaz: also the host-files? e.g. ${XDG_RUNTIME_DIR}? | 12:16 |
juergbi | the xauth file might be in there | 12:16 |
juergbi | this may also depend on your host system / display server | 12:17 |
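A rough sketch of the kind of `shell` configuration being discussed, assuming BuildStream's documented project.conf shell keys (`command`, `environment`, `host-files`); the specific variables and paths here are illustrative, not copied from the linked gnome-build-meta file:

```yaml
shell:
  command: ['bash', '--noprofile', '--norc', '-i']

  # Forward display-related environment from the host
  environment:
    DISPLAY: '$DISPLAY'
    XDG_RUNTIME_DIR: '$XDG_RUNTIME_DIR'
    WAYLAND_DISPLAY: '$WAYLAND_DISPLAY'

  # Mount the host runtime dir so the display sockets (and any xauth
  # cookie kept there) are visible inside the sandbox
  host-files:
    - path: '${XDG_RUNTIME_DIR}'
      optional: true
    - path: '/tmp/.X11-unix'
      optional: true
```

Which entries are actually needed depends on the host display server, as juergbi notes above.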
tomaz | hm, no, not that one, thanks for pointing it out. | 12:17 |
tomaz | does this need to be in the sdk (in this case, gnome-build-meta) or should it be set per project? (currently I'm trying to build dolphin) | 12:17 |
juergbi | if that doesn't help you could try disabling xauth access control | 12:17 |
juergbi | it's per project | 12:18 |
tomaz | cool cool. | 12:18 |
juergbi | (include files may be helpful to avoid duplication) | 12:18 |
tomaz | and yes, it was there (I just didn't pay attention) - I copied the app.yml file from the transmission buildstream repo. | 12:20 |
benschubert | tristan: are you around and have 5 minutes? :) | 14:55 |
tristan | benschubert, Yeah | 15:14 |
tristan | I will probably run off in a bit, I was curious and giving this a try: https://cython.readthedocs.io/en/latest/src/tutorial/profiling_tutorial.html | 15:15 |
tristan | not super verbose but a bit helpful | 15:15 |
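Roughly the recipe from the linked Cython profiling tutorial: enable the `# cython: profile=True` directive at the top of the .pyx being measured (here that would be _variables.pyx), rebuild, then drive it from plain Python with cProfile/pstats. The workload below is a placeholder, not real BuildStream code:

```python
import cProfile
import pstats


def workload():
    # hypothetical workload, e.g. load a project and resolve variables
    ...


# Profile the workload and dump the stats to a file
cProfile.runctx("workload()", globals(), locals(), "variables.prof")
stats = pstats.Stats("variables.prof")
stats.strip_dirs().sort_stats("cumulative").print_stats(20)
```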
benschubert | tristan: ok, then quick I'll be: I think that if we want to make the threading approach really neat, we need to stop having a "Messenger" object on the context that is used everywhere, but rather control the scope of them explicitly. | 15:19 |
benschubert | 1) Would you rather have this in the same MR or should I do this separately | 15:19 |
benschubert | 2) This is only for "jobs" so I could move everything that _might_ be running in jobs to take it more or less explicitly, or just get rid of it in the context entirely and pass it where relevant | 15:19 |
tristan | Interestingly, it's telling me that the majority of time spent from Variables.check() (which does most resolution), is actually spent in Value.resolve(): https://gitlab.com/BuildStream/buildstream/-/blob/tristan/partial-variables/src/buildstream/_variables.pyx#L522 | 15:19 |
tristan | 0.061 in Value.resolve() out of 0.070 time spent in Variables._resolve() | 15:20 |
tristan | benschubert, I'm not sure I understand or am on board with that | 15:20 |
tristan | benschubert, Maybe we want separate instances of a messenger object ? | 15:21 |
benschubert | yep that's it | 15:21 |
tristan | Regarding Messenger and Context separation, it appears to me that it was mostly already done | 15:21 |
benschubert | and that means that an object might use different instances of it | 15:21 |
benschubert | well, everything in the codebase is reaching the messenger through the context | 15:22 |
tristan | Messenger was created; before that, it was methods handled on Context. I think that separation makes sense | 15:22 |
benschubert | which makes separation hard | 15:22 |
tristan | Right, that could be split up | 15:22 |
tristan | At the cost of minor burden | 15:22 |
benschubert | I could also keep it for the things that do not run in the scheduler and then for the rest pass it more or less explicitly around | 15:23 |
benschubert | E.g. add a _messenger on Plugin, and a use_messenger(messenger) context manager on them | 15:23 |
tristan | Well, if we kept Messenger accessible on Context, it would be a good thing to have a single focal point when a thread is created, to doctor the Messenger in the child thread (or replace it with a new one which has access to a Queue or such) | 15:24 |
tristan | What I hope to preserve is that code need not know where it's running, of course | 15:25 |
tristan | and that we don't have to explicitly pass things around | 15:25 |
tristan | I would guess that job base class takes care of this | 15:25 |
benschubert | tristan: the problem comes with the Job that sends messages back to the parent | 15:25 |
benschubert | where we actually need to be able to access both | 15:25 |
benschubert | so we can't really use a "thread-local" approach for better performance | 15:25 |
benschubert | we need a finer grain handling | 15:26 |
tristan | Ummm | 15:26 |
benschubert | it's all entangled :/ | 15:26 |
tristan | So you want to marshal messages in the same memory space without sending them over a queue | 15:26 |
benschubert | yep | 15:27 |
benschubert | so we can avoid having to pass everything through the asyncio loop | 15:27 |
benschubert | which is one of the two biggest slowing points by moving to threads | 15:27 |
tristan | Maybe it's better to do that separately as a performance enhancement, it will involve some explicit locking and such | 15:27 |
benschubert | not necessarily | 15:28 |
tristan | it's just a lot to think about in a single review process, I get where you're going | 15:28 |
benschubert | Yeah, but on the other hand I'd rather not have a half-baked branch go in :) | 15:28 |
benschubert | since we're speaking about a 2x slowdown | 15:28 |
tristan | I do think there is a part of this I don't get though | 15:28 |
benschubert | otherwise a feature branch with multiple MRs going to it, then a massive MR to master | 15:29 |
benschubert | ah? | 15:29 |
tristan | " but rather control the scope of them explicitely. " | 15:29 |
tristan | I mean, I can envision a world where messages get added to a queue (not an IPC) for the frontend to process serially, in the same memory space | 15:29 |
tristan | That much I grok | 15:29 |
tristan | What I don't grok is why various parts of the code need to have any explicit knowledge of how messages get marshalled, or what extra explicit stuff needs to happen | 15:30 |
tristan | Maybe get to the merge request, and then we start figuring out how to make this transparent to Message() API clients | 15:31 |
benschubert | Ok, an example: let's take an Element. It runs some stuff in the background threads when running through jobs, and other stuff in the main thread (e.g. is_cached()). In both you need access to a different Messenger | 15:31 |
tristan | Do you though ? | 15:31 |
benschubert | the background thread will write to a file, have a different "silence" context and such | 15:32 |
tristan | Maybe without process separation, you just have a single messenger and a mutex | 15:32 |
tristan | Ah that | 15:32 |
benschubert | yeah | 15:32 |
tristan | That could just be a pid() check ? | 15:32 |
tristan | But still keep the same messenger ? | 15:32 |
benschubert | but then that means that you have: | 15:33 |
benschubert | message sent from job -> messenger -> queue -> root messenger | 15:33 |
benschubert | mmh we could actually use a stack of messengers | 15:33 |
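A hypothetical sketch of that "stack of messengers" idea, not BuildStream's actual API: each job thread scopes the messenger it should use, and message delivery picks the innermost scoped one, falling back to the root messenger on the main thread.

```python
import threading
from contextlib import contextmanager


class MessengerStack:
    def __init__(self, root_messenger):
        self._root = root_messenger
        self._local = threading.local()  # each thread gets its own stack

    @contextmanager
    def use(self, messenger):
        # Scope a messenger to the current thread, e.g. for one job
        stack = getattr(self._local, "stack", None)
        if stack is None:
            stack = self._local.stack = []
        stack.append(messenger)
        try:
            yield messenger
        finally:
            stack.pop()

    def message(self, message):
        # Deliver through the innermost scoped messenger, or the root one
        stack = getattr(self._local, "stack", [])
        target = stack[-1] if stack else self._root
        target.message(message)
```

A job thread would wrap its work in `use(job_messenger)`, while code on the main thread falls through to the root messenger, which is roughly the explicit scoping benschubert is describing.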
tristan | You have "parts of the code base issuing messages", and a focal point (Stream) for an abstract frontend to display/log main messages | 15:34 |
tristan | All in one memory space (at least for Stream and below it) | 15:34 |
benschubert | Nope, the messenger is also used for the communication between child and parent jobs | 15:34 |
benschubert | uh that's actually maybe the pain point I need to solve first | 15:34 |
tristan | But with threading, those are the same right ? | 15:34 |
benschubert | well I'm trying to merge them but can't right now, since they run in different threads | 15:35 |
tristan | Ohhh is the messenger hijacked and mixed into the parent/child stuff ? | 15:35 |
benschubert | which allowed for an easy messenger separation through locals() | 15:35 |
benschubert | yeah | 15:35 |
benschubert | ok I guess I need to solve this first actually | 15:35 |
tristan | That used to be just a discrete queue between Job parent and child | 15:35 |
tristan | That needs to go away first I'd expect, and the rest gets simplified yeah | 15:35 |
benschubert | if envelope.message_type is _MessageType.LOG_MESSAGE: | 15:36 |
benschubert | # Propagate received messages from children | 15:36 |
benschubert | # back through the context. | 15:36 |
benschubert | self._scheduler.notify_messenger(envelope.message) | 15:36 |
benschubert | Is the real pain point | 15:36 |
benschubert | because at that point, you want the main thread's messenger | 15:36 |
tristan | Ahhh right, so Jobs had their discrete Queue, which was also used to marshal messages, along with its internal IPC | 15:36 |
benschubert | yeah | 15:36 |
tristan | Then Messenger object was born at some point, and got mixed up in all of this | 15:36 |
benschubert | and then you get the notifier | 15:37 |
tristan | Uh oh | 15:38 |
benschubert | which is even worse, it is from the App, but it both calls into and is called from the scheduler | 15:38 |
tristan | The notifier sounds like the madness which got introduced to the scheduler, instead of passing State objects around everywhere | 15:38 |
tristan | We can kill that off with a firing squad | 15:38 |
benschubert | https://gitlab.com/BuildStream/buildstream/-/blob/master/src/buildstream/_stream.py#L1614 is where it gets murky | 15:41 |
tristan | Yeah | 15:41 |
tristan | I've been planning on nuking that all to hell | 15:41 |
benschubert | How would you see that instead? | 15:41 |
tristan | It may make sense to do that first and put it at the beginning of your MR, maybe even merge the nuking of that first | 15:42 |
benschubert | Yeah, might make my whole job easier | 15:42 |
tristan | So originally, we had Context (no Messenger) | 15:42 |
benschubert | How would you see that? Direct 'state' access in the scheduler? | 15:42 |
tristan | And that had a callback which the frontend can use to display messages | 15:42 |
tristan | *YES* | 15:42 |
tristan | That is exactly what state is for | 15:42 |
* benschubert is not clear about what Stream is meant to do | 15:42 | |
tristan | It is a goal to make sure that the frontend only ever talks to Stream | 15:43 |
tristan | And possibly Context and Project and State | 15:43 |
tristan | Stream is the main interface to the core | 15:43 |
benschubert | when we were talking about splitting scheduler and frontend in two processes, Stream would have been the entry point in the child process right? | 15:44 |
tristan | Frontends can be implemented in various ways, so long as Stream draws a hard line | 15:44 |
benschubert | That's how it should be seen | 15:44 |
tristan | Right | 15:44 |
tristan | Stream would be created in the frontend | 15:44 |
tristan | And Stream would decide on its own to create a subprocess to run the scheduler in | 15:45 |
benschubert | Then how do you handle https://gitlab.com/BuildStream/buildstream/-/blob/master/src/buildstream/_stream.py#L1614 ? That seems to be callbacks that should be sent back "up" ? | 15:45 |
tristan | But Stream already was the main abstraction point before starting talking about process separation, it would have stayed that way | 15:45 |
tristan | State was created so that it could be updated within the core and changes received in the frontend | 15:46 |
tristan | All passed through State | 15:46 |
tristan | State state = stream.get_state() | 15:46 |
tristan | state.register_callback("foo", self.foo_changed_cb) | 15:47 |
tristan | in the core... State is passed through constructors | 15:47 |
tristan | And the core says stuff like state.set_foo() | 15:47 |
tristan | Where internally state says { if foo != self.foo: self.foo = foo; run_callbacks("foo") } | 15:48 |
tristan | like that | 15:48 |
tristan | benschubert, if ever process separation happened, State internally would need to grow a Queue, so that all those explicit extra codepaths running through the Scheduler would never have been needed | 15:48 |
tristan | State would take care of knowing what process it's in and where to deliver the notifications | 15:49 |
tristan | core just gets a State object passed around, and frontend just listens to a State object, and if process separation happens, State has to work it out while providing everyone with the same functioning API | 15:50 |
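A minimal sketch of the State pattern tristan describes: the frontend registers callbacks, the core calls setters, and listeners only fire when a value actually changes. The "foo" key and method names are illustrative, not the real BuildStream State API.

```python
class State:
    def __init__(self):
        self._callbacks = {}   # key -> list of callables
        self._values = {}      # key -> current value

    def register_callback(self, key, callback):
        # Frontend side: state.register_callback("foo", self.foo_changed_cb)
        self._callbacks.setdefault(key, []).append(callback)

    def _set(self, key, value):
        # Core side: only notify listeners when the value actually changes
        if self._values.get(key) != value:
            self._values[key] = value
            for callback in self._callbacks.get(key, []):
                callback(value)

    def set_foo(self, foo):
        self._set("foo", foo)
```

If process separation ever happened, `_set()` would be the one place that pushes the change over a queue instead of calling listeners directly, which is the point about State hiding that detail from both the core and the frontend.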
benschubert | Ok, I think I understand the gist, but not sure I'm confident of having clear vision on how to do this | 15:51 |
benschubert | And digging around the messenger it seems I could also do without that change for now, so I'll probably let you get that one | 15:52 |
tristan | If you start from a fresh branch off master, and then give the State()/Messenger() instances to Scheduler and Queue constructors and whoever needs it, remove this crazy manual message marshalling function, then there will be gaps to fill | 15:53 |
tristan | Things which relied on this extra "notification" thing just call directly into the State they were given, or the Messenger they were given | 15:53 |
tristan | State for state changes and Messenger for events/messages | 15:53 |
tristan | Alright :) | 15:54 |
benschubert | Oh wait, you mean basically nuking 'MessageType' and calling into messenger / state instead? | 15:54 |
tristan | MessageType is required | 15:55 |
tristan | It's a part of the message | 15:55 |
tristan | it indicates START/SUCCESS/FAIL/SKIPPED etc, it's the component of a Message() which is the thing that gets pumped through the Messenger() queue | 15:55 |
benschubert | ah we're not talking about the same one then I think | 15:56 |
benschubert | https://gitlab.com/BuildStream/buildstream/-/blob/master/src/buildstream/_scheduler/jobs/job.py#L74 | 15:56 |
tristan | Ah yeah, that's different | 15:56 |
tristan | That used to just be strings I think | 15:57 |
tristan | benschubert, *that* thing only goes away with your branch | 15:57 |
benschubert | true | 15:57 |
tristan | The scheduler getting its dirty hands involved in state change notifications and message delivery is what I want to remove | 15:57 |
tristan | basically that notification handler | 15:58 |
tristan | scheduler._notify() goes away | 15:58 |
benschubert | https://gitlab.com/BuildStream/buildstream/-/blob/master/src/buildstream/_scheduler/scheduler.py#L605 so we would remove 'NotificationType' right? | 15:58 |
tristan | Notification and NotificationType goes away | 15:58 |
benschubert | ok I *think* I got it | 15:59 |
benschubert | I'll try a MR today and if I didn't get it right, feel free to take over/start anew ? | 15:59 |
tristan | Yeah, all that extra noise getting in between people sending messages and notifying state changes, and the receiving State and Messenger objects | 15:59 |
tristan | Yeah | 15:59 |
tristan | https://gitlab.com/BuildStream/buildstream/-/blob/tristan/partial-variables/src/buildstream/_variables.pyx#L522 <-- bottleneck is here :-S | 16:00 |
* tristan goes and gets chicken | 16:00 |
benschubert | haha, enjoy! | 16:00 |
tristan | benschubert, it looks like creating that list, and calling "".join() is in fact the hottest part of the whole thing | 16:00 |
tristan | 0.061 out of 0.070 is spent there | 16:00 |
benschubert | tristan: I would not have guessed :/ | 16:01 |
tristan | So all of the recursion stuff is quite performant with the arrays and everything | 16:01 |
tristan | Yeah me neither :-/ | 16:01 |
tristan | It could be I could run through the arrays, count up the size, use a loop of strncpy() into an allocated buffer, and return a fresh string | 16:01 |
tristan | and make it faster just there | 16:01 |
tristan | I don't know | 16:01 |
tristan | It could also be the nature of the data from valentind's variable test which causes the hotspot to go there | 16:02 |
tristan | should verify first | 16:02 |
benschubert | you can try on fsdk too | 16:02 |
tristan | yeah, what's the branch which works with bst2 ? | 16:03 |
* tristan could run show on that to help benchmark | 16:03 | |
tristan | valentind's test data has a lot of variables with many many components | 16:03 |
tristan | where real life has more variables with fewer components | 16:03 |
benschubert | I'm not sure what's the branch to use for that nowadays | 16:06 |
benschubert | haven't looked at it for quite a bit | 16:06 |
Frazer61 | Hi, just made the proposal for bst source show. I was wondering though about pylint: I noticed the pylint config, and most likely pylint itself, is quite outdated and doesn't support all the new features, or enable a few things that could be beneficial, such as enforcing LF for line endings. Is this something you would be interested in me making | 16:26 |
Frazer61 | a patch for? | 16:26 |
benschubert | Frazer61: sure, that would be beneficial! | 16:32 |
benschubert | don't hesitate to ask here if you are unsure whether a new rule should be enabled or not :) | 16:32 |
Frazer61 | ok, do you want me to make a patch to show it off? I have done some work locally | 16:41 |
benschubert | You can always make an MR, it's good for visibility | 16:42 |
Frazer61 | will do :) | 16:42 |
tristan | Frazer61, we've updated it a few times over time, it of course gets outdated and it's work to support new errors; patches are pretty much always welcome for that, yes :) | 17:00 |
tristan | Enabling new pylint errors, or errors which are currently disabled, can be more difficult :) | 17:01 |
benschubert | But might be worth it if you have the courage (but better to ask first for those :D) | 17:01 |
benschubert | tristan: https://gitlab.com/BuildStream/buildstream/-/merge_requests/1985 I've started | 17:01 |
benschubert | currently working on the `retry` part | 17:01 |
benschubert | tristan: does it look like what you were thinking? | 17:01 |
tristan | Frazer61, I'll probably not have time to reply to that until monday | 17:02 |
tristan | two things I can say off hand is that (A) It would be good to follow the same API as `bst show`, i.e. use a `--format` parameter... (B) There is no way to know what a "git" plugin is, i.e. filtering output based on plugin 'kind' strings is not meaningful because a plugin 'kind' can mean something different in every project (even multiple projects connected through junctions) | 17:04 |
tristan | benschubert, yeah that looks like it's on the right track | 17:06 |
tristan | benschubert, ideally the whole Notification/NotificationType would be removed of course | 17:06 |
benschubert | tristan: yep, I'm going part by part to make the commits easier to review :) | 17:06 |
benschubert | For SUSPEND/QUIT/etc, I'll make methods in State, where you can register a callback, does that sound good? | 17:07 |
tristan | That sounds good, I'm not sure what the right API is for that; previously that part was mostly controlled by the frontend anyway | 17:08 |
benschubert | I think frontend -> stream -> state -> whatever is the logical point there | 17:08 |
tristan | But if the frontend needs to be told, it could be good to have a kind of execution state | 17:08 |
benschubert | so the only two real interfaces between stream and scheduler are messenger and state | 17:08 |
tristan | Yeah, but these notifications look weird because... the frontend basically tells the stream to suspend | 17:09 |
benschubert | Yeah we have that in Stream for now | 17:09 |
tristan | and to resume | 17:09 |
benschubert | which is right no? | 17:09 |
tristan | So I don't know why the frontend needs to be told | 17:09 |
benschubert | ah yeah absolutely :) | 17:09 |
benschubert | for the "scheduling state" (terminated/running/etc), I guess moving that from Stream to state is the better way? | 17:10 |
tristan | If we do need that state tracking and notification for some reason, it could be good to have state have an API like State.set_execution_state(ExecutionState.SUSPENDING | ExecutionState.SUSPENDED | ExecutionState.TERMINATING ...) | 17:10 |
benschubert | yup, will see what I need | 17:10 |
tristan | and have a single notification "execution-state-changed" | 17:10 |
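A hedged sketch of what that proposed execution-state API could look like on the State object; the ExecutionState enum and method names mirror the states tristan lists and are not existing BuildStream types.

```python
from enum import Enum


class ExecutionState(Enum):
    RUNNING = "running"
    SUSPENDING = "suspending"
    SUSPENDED = "suspended"
    TERMINATING = "terminating"
    TERMINATED = "terminated"


class State:
    def __init__(self):
        self._execution_state = ExecutionState.RUNNING
        self._callbacks = []  # listeners for "execution-state-changed"

    def register_execution_state_callback(self, callback):
        self._callbacks.append(callback)

    def set_execution_state(self, execution_state):
        # Single notification point for every execution-state transition
        if execution_state is not self._execution_state:
            self._execution_state = execution_state
            for callback in self._callbacks:
                callback(execution_state)
```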
tristan | benschubert, regarding your last, I don't think so | 17:11 |
tristan | I mean, it depends | 17:11 |
benschubert | So keeping that in stream? | 17:12 |
tristan | I think that the frontend controls everything via Stream | 17:12 |
tristan | And I think the frontend cannot call things on State which "cause things to happen" | 17:12 |
benschubert | right | 17:12 |
tristan | For the frontend, State is only something to add listeners to and update UI | 17:12 |
benschubert | yeah I don't think I did anything there that changes that view | 17:12 |
tristan | So if there is an execution-state-changed signal/notification, that goes on State | 17:12 |
tristan | But the frontend should still be calling Stream.suspend() and such | 17:13 |
* tristan not sure what the API exactly is named | 17:13 | |
benschubert | yeah I think we're on the same page there | 17:13 |
tristan | Right, we probably agree, I'm just being pedantic :) | 17:13 |
benschubert | anyways, with well-separated commits, it should be easy to change in the worst case | 17:13 |
tristan | This will remove a bunch of unneeded redirection of API calls, will be good :) | 17:14 |
tristan | Hopefully make your refactor easier | 17:14 |
tristan | interesting, the string "/usr" is 16 bytes long | 17:42 |
* tristan might have a memcpy version that'll work | 17:42 | |
tristan | with https://github.com/cython/cython/blob/master/Cython/Includes/cpython/unicode.pxd | 17:42 |
benschubert | nice :) | 17:45 |
benschubert | I'm almost done too with the notification stuff | 17:45 |
benschubert | tristan: for 'interrupt', do you prefer: | 17:48 |
benschubert | - callback set on 'state' | 17:48 |
benschubert | - callback set on 'stream' | 17:48 |
benschubert | ? I think a callback on 'stream' would be nicer (we could have stream pass the callback to Scheduler.run()?) | 17:48 |
benschubert | I actually think we should do that for the last 5 types: tick, suspend, unsuspend, quit, terminate - just pass callbacks | 17:48 |
benschubert | it seems weird passing callbacks through the state for such | 17:54 |
tristan | I don't know if I follow | 17:57 |
tristan | Ummm | 17:57 |
tristan | So right, the scheduler handles the interrupt and informs the frontend | 17:57 |
benschubert | which would not happen like this if they were separated processes | 17:57 |
tristan | it is not the frontend who catches ^C and decides to tell the stream to suspend | 17:57 |
tristan | Or is it ? | 17:57 |
benschubert | it's the event loop | 17:58 |
tristan | I guess this is more user friendly to the frontend | 17:58 |
tristan | I think the frontend *does* handle it in some cases, but only when interacting with the user using Click and the click.Abort exception | 17:58 |
benschubert | Let me do two more commits I think I have a good solution actually :) | 17:58 |
benschubert | yeah, frontend handles | 17:58 |
benschubert | but loop catches | 17:58 |
tristan | I think you're on the right track | 17:59 |
benschubert | so loop needs to bubble it up | 17:59 |
tristan | hmmm, /me not getting this unicode API | 18:01 |
tristan | I'm not sure that cython header file really means "UTF-8 encoded string" when it says so | 18:04 |
benschubert | If someone has time, I've got a few MRs that could have reviews: | 18:05 |
benschubert | https://gitlab.com/BuildStream/bst-plugins-experimental/-/merge_requests/123 (ostree rewrite) | 18:05 |
benschubert | https://gitlab.com/BuildStream/bst-plugins-experimental/-/merge_requests/124 (quick cleanup on CI) | 18:05 |
benschubert | https://gitlab.com/BuildStream/buildstream/-/merge_requests/1984 (quick cleanup) | 18:05 |
tristan | crap | 18:15 |
* tristan is using PyUnicode_AsUnicode, PyUnicode_GetSize to build the buffer with memcpy, and PyUnicode_FromUnicode() at the end keeps throwing invalid unicode errors | 18:17 | |
tristan | ValueError: character U+abf7b198 is not in range [U+0000; U+10ffff] | 18:17 |
tristan | Also, https://github.com/cython/cython/blob/master/Cython/Includes/cpython/unicode.pxd is not the same as .tox/.package/lib/python3.7/site-packages/Cython/Includes/cpython/unicode.pxd | 18:18 |
tristan | PyUnicode_GetSize() says "Return the length of the Unicode object.", but doesn't specify whether that's in code points or bytes either | 18:19 |
benschubert | :/ | 18:20 |
* tristan needs to find out what those map to and how | 18:21 | |
tristan | maybe it's not all installed into .tox | 18:21 |
benschubert | tristan: https://gitlab.com/BuildStream/buildstream/-/merge_requests/1985 ready! if you have time to review, it would be great to get that in | 18:21 |
benschubert | I'm going to eat, and will address your comments afterwards if you had time :) | 18:22 |
tristan | Yeah, it's looking great overall :) | 18:31 |
tristan | What a relief :) | 18:31 |
tristan | I won't be around when you come back | 18:32 |
benschubert | tristan: haha I'm not away yet, but great thanks :) | 18:32 |
tristan | at least if I am, I will feel quite embarrassed about it, and be late meeting people tomorrow because of it :) | 18:32 |
benschubert | I'll add the nit | 18:32 |
benschubert | don't overstay :) thanks for the review, I'll address the nits and once I've got an approval I'll merge | 18:32 |
benschubert | actually, if you're fine with it and I address your comments I guess I can merge as is? | 18:33 |
tristan | I think so yes | 18:33 |
tristan | benschubert, Just try interacting with it first | 18:33 |
tristan | You probably did | 18:33 |
tristan | but check the usual suspects, ^C, terminate, `kill -TERM` | 18:34 |
tristan | ^Z | 18:34 |
tristan | If you don't see any obvious regressions, let's just land it | 18:34 |
benschubert | okay! thanks | 18:36 |
tristan | maybe instead of constructing a list, I could make an iterator for Value to pass to "".join() | 18:37 |
tristan | Then we reuse join on python str objects without the overhead of creating a list or appending strings to it | 18:38 |
tristan | Hmm | 18:38 |
* tristan will give it a shot tomorrow-ish | 18:38 | |
tristan | tomorrow real life kicks in for a day | 18:38 |
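On tristan's iterator idea above: since "".join() typically materializes a generator into a temporary list internally anyway, it is not obvious the iterator version wins. A quick micro-benchmark like this (sample data is made up, not real variable values) could settle it before rewriting Value:

```python
import timeit

parts = ["/usr", "/lib", "/share", "%{prefix}", "-bin"] * 20


def join_list():
    # current approach: build a list, then join it
    return "".join([p for p in parts])


def join_generator():
    # proposed approach: pass an iterator straight to join()
    return "".join(p for p in parts)


print("list:     ", timeit.timeit(join_list, number=100_000))
print("generator:", timeit.timeit(join_generator, number=100_000))
```

In practice the list version often comes out ahead for this reason, so measuring on real variable data (as suggested earlier with the fsdk project) is the safer call.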
*** benschubert has quit IRC | 23:09 |