*** tristan has quit IRC | 05:55 | |
*** tristan has joined #buildstream | 06:11 | |
*** ChanServ sets mode: +o tristan | 06:11 | |
*** jude has joined #buildstream | 07:34 | |
*** jude has quit IRC | 07:41 | |
*** albfan[m] has left #buildstream | 08:12 | |
*** tlater has joined #buildstream | 08:20 | |
juergbi | tristan: _yaml.dump() doesn't appear to be able to handle ChainMap (public data). is this expected? | 08:21 |
*** jude has joined #buildstream | 08:26 | |
tristan | hmmm | 08:33 |
tristan | juergbi, it looks straightforward actually, that is confusing | 08:34 |
tristan | it means ruamel.yaml doesn't handle it well | 08:34 |
juergbi | i get ruamel.yaml.representer.RepresenterError: cannot represent an object: ChainMap({'bst': ChainMap({'... | 08:35 |
tristan | juergbi, on the other hand, anything that passes through node_sanitize() should | 08:35 |
tristan | it will create ordered dicts, though | 08:35 |
tristan | juergbi, ok so, probably we need a utility to dump in a cleaner way; it would be nice to have a dumper that understands ordered dicts but dumps as regular dicts, in the order of the OrderedDict | 08:37 |
tristan | there is some code here and there for that I can point to | 08:37 |
tristan | it involves adding a representer class, and is different for python2 vs python3... | 08:37 |
*** jonathanmaw has joined #buildstream | 08:38 | |
tristan | juergbi, https://gitlab.com/BuildStream/defs2bst/blob/master/defs2bst/bstDumper.py | 08:38 |
tristan | juergbi, there is the dict representer there, which will determine the order in a hard coded way | 08:39 |
tristan | juergbi, the rest of the code there ensures that strings are encoded in a nice human readable way (instead of quoted with embedded \n characters) | 08:39 |
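(A minimal sketch of the kind of representer being described here, assuming the PyYAML-style API which ruamel.yaml's legacy interface mirrors; the names are illustrative, not the actual defs2bst code:)

```python
import yaml
from collections import OrderedDict

def _represent_ordereddict(dumper, data):
    # Emit an OrderedDict as a plain YAML mapping; passing items() bypasses
    # the default key sorting, so insertion order is preserved in the output
    return dumper.represent_mapping('tag:yaml.org,2002:map', data.items())

yaml.add_representer(OrderedDict, _represent_ordereddict)

# e.g. node_sanitize() output (OrderedDicts) now dumps as ordinary mappings
print(yaml.dump(OrderedDict([('kind', 'import'), ('sources', [])]),
                default_flow_style=False))
```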
juergbi | ok, ta | 08:40 |
juergbi | btw: i was able to push a few artifacts to the new server. average upload speed was pretty low, though, not sure why | 08:42 |
juergbi | the current setup with separate sshfs connection for each pull/push is definitely not ideal, though. it might be more stable with manually mounted sshfs and then specifying the local path in buildstream.conf | 08:44 |
juergbi | to support artifact availability checks the plan is anyway to mount a single sshfs for the whole session, i.e., we're moving towards that (except that we won't have to manually mount it) | 08:45 |
*** tiagogomes has joined #buildstream | 08:47 | |
tristan | juergbi, yeah, let's not introduce some halfway thing where the user has to do a manual sshfs, then | 08:48 |
tristan | juergbi, I'm also struggling with this btw, it's not working at all here | 08:48 |
tristan | So far, locally I've added an 'if mountpoint "${remote_repo}"' clause to the cleanup handler | 08:49 |
juergbi | manual sshfs already works by virtue of supporting local repos but sure, it's not something we should recommend | 08:49 |
tristan | So that it doesn't leave hundreds of ostree-push.XXXXX directories in CWD | 08:49 |
tristan | Next, I'm adding an optional graceful termination to utils._call(), since we currently murder with SIGKILL | 08:50 |
tristan | So I can add a trap for SIGTERM to your shell scripts | 08:50 |
tristan | again so that it doesn't litter temp dirs all over the place when forceful termination is employed | 08:50 |
tristan | Next, I will be adding an option to give the scripts a base directory, so that we can do those temp dirs in ~/.cache/buildstream/artifacts, and not litter the CWD at all | 08:51 |
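(A rough illustration of the graceful-termination idea; this is a hypothetical helper, not the actual utils._call() signature: SIGTERM goes out first so the script's trap can clean up, and SIGKILL is only a last resort.)

```python
import subprocess

def _terminate_gracefully(proc, timeout=5):
    # Hypothetical helper: send SIGTERM first so a shell 'trap' handler can
    # remove its temp dirs, and only fall back to SIGKILL if the child hangs.
    proc.terminate()
    try:
        proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        proc.wait()

# usage sketch:
# proc = subprocess.Popen(['ostree-push', ...])
# ...
# _terminate_gracefully(proc)
```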
tristan | Another thing I'm noticing, is that the huge amount of time it takes for me to fail to pull the artifacts... is blocking the build for some reason | 08:51 |
gitlab-br-bot | buildstream: merge request (tristan/test_tests->master: WIP: Add integration tests) #40 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/40 | 08:52 |
*** tiagogomes has quit IRC | 08:52 | |
tristan | So I don't know exactly why, but after failing to pull a base system, it does not start building it while attempting to pull later things | 08:52 |
juergbi | hm, because it shares sched.fetchers with source fetching? | 08:54 |
juergbi | pull attempt will hopefully be much faster with the persistent sshfs | 08:54 |
*** tiagogomes has joined #buildstream | 08:56 | |
tristan | Ohhh that could be the reason yes | 08:59 |
tristan | fetch queue starvation | 08:59 |
tristan | juergbi, maybe we should reverse a loop in the scheduler so that it visits last queues first, then we would just have to ensure that Queue->ready() implementations make sense, though | 09:01 |
juergbi | yes, preferring later queues could make sense | 09:02 |
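(A sketch of the reversal being discussed, with hypothetical queue/scheduler names rather than BuildStream's actual scheduler API: visiting later queues first means pull/build work is dispatched before the fetch queue can consume every available token.)

```python
def schedule_pass(queues, tokens):
    # Hypothetical scheduling pass: walk the queues back-to-front so elements
    # that are further along (pull, build) are dispatched before fetch,
    # avoiding the starvation described above.
    for queue in reversed(queues):
        while tokens > 0 and queue.ready():
            queue.dispatch_next()
            tokens -= 1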
*** tiagogomes has quit IRC | 09:11 | |
*** tiagogomes has joined #buildstream | 09:12 | |
*** ssam2 has joined #buildstream | 09:12 | |
jonathanmaw | morning folks, I can't remember precisely what the resolution was, but I'm having an issue building my dpkg build elements. symlinks seem to be broken | 09:22 |
jonathanmaw | i.e. "# file /usr/bin/rst2man | 09:22 |
jonathanmaw | /usr/bin/rst2man: broken symbolic link to ../../../../../../../../../home/jonathanmaw/.cache/buildstream/build/debian-python-x9gp05kc/root/etc/alternatives/rst2man" | 09:22 |
*** violeta has joined #buildstream | 09:23 | |
tristan | jonathanmaw, ok I think there is still something broken | 09:26 |
tristan | jonathanmaw, does your build directory happen to be symlinked ? | 09:26 |
jonathanmaw | tristan: yes. I'll try running it without the symlink | 09:27 |
tlater | Is there a way to add flags to the git commands buildstream runs? I can't test git sources with a .git directory. | 09:27 |
tristan | jonathanmaw, utils.py:_relative_symlink_path() needs to be fixed, so that the realpath of the passed 'root' directory is used as well | 09:28 |
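(The gist of that fix as a hypothetical sketch; the real utils.py helper has a different name and signature. The point is to resolve the passed root through os.path.realpath() so a symlinked ~/.cache/buildstream does not inflate the number of '../' components.)

```python
import os

def relative_symlink_target(root, link_dir, target):
    # Hypothetical illustration: compute the link target relative to the
    # *resolved* sandbox root, not the symlinked path it was reached by.
    real_root = os.path.realpath(root)
    real_dir = os.path.realpath(link_dir)
    prefix = os.path.relpath(real_root, real_dir)       # e.g. '../..'
    return os.path.join(prefix, target.lstrip(os.sep))  # '../../etc/alternatives/rst2man'
```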
ssam2 | tlater, it's reasonably simple to run `git daemon` locally | 09:28 |
ssam2 | tlater, the Morph test suite did that (although it was written in shell so impossibly ugly), i'll see if I can find the example | 09:29 |
tristan | tlater, I'm not hugely concerned with the Source implementation tests to be honest, we have decent coverage for that in ./setup.py test | 09:29 |
jonathanmaw | tristan: yep, build passed when I used a config file to change the dirs to their real paths instead of the symlink | 09:29 |
tristan | jonathanmaw, can you fix it ? | 09:29 |
jonathanmaw | Is there a way to set config that doesn't require me to pass "--config", by the way? | 09:30 |
ssam2 | tlater, http://git.baserock.org/cgit/baserock/baserock/morph.git/tree/yarns/implementations.yarn#n460 for example [however, seems you don't need such a thing anyway thankfully] | 09:30 |
* tristan did it before, but messed up something and ended up reverting locally | 09:30 | |
tristan | jonathanmaw, https://buildstream.gitlab.io/buildstream/config.html#config | 09:30 |
tristan | jonathanmaw, the default is $XDG_CONFIG_HOME/buildstream.conf, you only need --config to load an alternative one | 09:31 |
jonathanmaw | ta tristan | 09:32 |
jonathanmaw | as for the _relative_symlink_path(), I'll make an issue and assign it to myself. | 09:33 |
tlater | Hm, yeah, let's not I suppose. Can I also ignore the ostree source test? It fails exclusively on the CI servers due to some tracking issue I can't reproduce locally. | 09:34 |
jonathanmaw | though I'd like to get my dpkg elements back into a usable state before I dive into that problem. | 09:34 |
tristan | jonathanmaw, ok just create an issue, I'll try to find a moment to fix that since I seem to be picking up pieces left, right and center anyway | 09:39 |
tristan | tlater, ignore... | 09:39 |
gitlab-br-bot | buildstream: issue #44 ("Symlinks in the sandbox are broken by the path to the buildstream cache containing symlinks") changed state ("opened") https://gitlab.com/BuildStream/buildstream/issues/44 | 09:39 |
tristan | tlater, can you change the tests so that the output is in the main window ? | 09:40 |
tristan | tlater, I'd like to see the bst output in there too, and not have it redirected | 09:40 |
tristan | tlater, I'm looking at this for instance: https://gitlab.com/tlater/buildstream-tests/-/jobs/20640091 | 09:40 |
tristan | Also interestingly, it means we can enable ANSI color codes in gitlab runners, YAY ! | 09:41 |
tristan | we can add an option to bst to forcefully emit the color codes even when not connected to a terminal | 09:41 |
tristan | very easily | 09:41 |
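(As a sketch of how small that check is; the force override here is a hypothetical parameter, not an existing bst option:)

```python
import sys

def use_colors(force=None):
    # Hypothetical: an explicit override (e.g. from a command line flag) wins,
    # otherwise keep the usual "only colorize a real terminal" behaviour.
    if force is not None:
        return force
    return sys.stdout.isatty()
```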
tlater | Alright, can do | 09:42 |
tlater | Should I tee the output or stop using log files? | 09:42 |
tristan | tlater, think we are close to a merge ? I'm looking at that branch and it looks like a lot of stuff was removed (in the changes of the MR) | 09:42 |
tristan | don't really understand what's going on there | 09:42 |
tristan | tlater, I think it's fine to just stop using log files | 09:42 |
tristan | tlater, in any case this is something we run and want to see the output of | 09:43 |
tlater | I prepared both merge requests, they can be merged. I removed a few tests that I'm still debugging locally so we can get this working. | 09:43 |
tristan | on a CI it's very annoying to try to find out how one could possibly shell in and look at a log file | 09:43 |
tristan | and when running locally, it doesn't really matter | 09:43 |
tristan | ok good news | 09:43 |
tlater | This doesn't include the cache yet, I think I got that working finally... But aside from taking forever to build, the tests should work. | 09:44 |
tristan | juergbi, you said two things, 1.) You were able to push some artifacts... and 2.) a single stable mount would be nice to have, and supported if you kind of fool your own setup and mount first | 09:52 |
tristan | juergbi, but were you able to push (or pull) any artifact without fooling your setup into thinking it's local ? | 09:53 |
juergbi | yes, push did work for me with SSH in the buildstream config, but only for a few small artifacts | 09:54 |
juergbi | also, i'm using SSH control masters for all connections (via .ssh/config) which means that all sshfs instances end up as a single SSH/TCP connection | 09:55 |
juergbi | (i only tested with a few small artifacts, i didn't see it fail with larger ones) | 09:57 |
tristan | I see | 09:57 |
tristan | So, knowing about control masters and .ssh/config is not really a sane option for users | 09:58 |
tristan | I don't really want to even know what those are :-/ | 09:58 |
juergbi | this will also be irrelevant with the mentioned plan of having buildstream perform a single mount for the whole session | 09:58 |
juergbi | which is essentially required for the artifact availability checks we want | 09:59 |
juergbi | in hindsight, maybe i should have directly implemented that, instead of the all-in-one shell scripts | 09:59 |
tristan | I'm having a very hard time terminating these jobs gracefully fwiw | 10:00 |
juergbi | SIGTERM isn't helping? fuse can be a pain with regards to this at times | 10:01 |
tristan | juergbi, I handle SIGTERM in the script, then I have tried either SIGTERM or SIGKILL to sshfs... but by the time sshfs exits (in the script), fusermount does not succeed in the unmount | 10:03 |
tristan | that is, after adding an option to try SIGTERM from utils.py | 10:03 |
juergbi | hm, fusermount -u is supposed to terminate sshfs on its own | 10:04 |
tristan | juergbi, even before sshfs returns ? | 10:04 |
tristan | sshfs runs (for a long time), and after that there is an ssh -x process, which fusermount -u will terminate | 10:05 |
juergbi | you mean just establishing the connection takes a long time for you? | 10:05 |
tristan | a very, very long time | 10:05 |
tristan | over 5 seconds | 10:06 |
tristan | can be more | 10:06 |
juergbi | it's definitely sub-second here, but that's with control master | 10:06 |
juergbi | over 5 seconds sounds unusably slow even for regular connections, though | 10:07 |
tristan | yes, it's horrible :) | 10:10 |
juergbi | 0.17s here even with ControlPath none | 10:10 |
juergbi | ah, no, that was somehow still using the control master. it's 1s here without control master | 10:11 |
tristan | how do I configure that thing ? | 10:12 |
tristan | ControlMaster auto ? | 10:12 |
tristan | as a sub config of gnome7 ? | 10:12 |
* tristan notes a side effect of dumping these tempdirs in the CWD is that ls blocks indefinitely while this long mount process is taking place | 10:13 | |
juergbi | ControlMaster auto | 10:13 |
juergbi | ControlPath /run/user/1001/ssh-%r@%h:%p | 10:13 |
juergbi | ControlPersist 1h | 10:13 |
juergbi | is what i'm using. you have to adapt the path to something that works for you | 10:13 |
juergbi | i use it under 'Host *' but you can also use it just for gnome7, of course | 10:14 |
tristan | Ah I see | 10:16 |
tristan | I'm getting constantly banned | 10:16 |
tristan | because my config is screwed | 10:16 |
tristan | still | 10:16 |
tristan | the incorrect config presents an opportunity to handle the case of gracefully terminating while sshfs is running, which seems impossible | 10:17 |
ssam2 | i've had issues of sshfs hanging my machine in the past | 10:19 |
ssam2 | which I mitigated a bit by adding "ServerAliveInterval 15" into my ~/.ssh/config | 10:19 |
juergbi | it seems odd to me that sshfs wouldn't terminate (with non-0 exit code) in case ssh connection fails | 10:19 |
ssam2 | maybe something like `-oServerAliveInterval=1` on the sshfs commandline would help ? | 10:20 |
juergbi | not sure whether that would change anything if connection is not established yet | 10:20 |
ssam2 | ah maybe not | 10:20 |
juergbi | if the server is silently dropping packets, huge TCP timeouts might be the issue here | 10:21 |
juergbi | there is a ConnectTimeout option | 10:21 |
tristan | What has been happening is that my config was incorrect, so I was missing artifacts@ | 10:25 |
tristan | But interestingly, sshfs takes a very long time to deny me, and when it's terminated during that process, it leaves me with a mount point | 10:25 |
tristan | which fusermount -u (called directly after waiting for the sshfs pid to be reaped) fails to unmount | 10:26 |
tristan | after a short timeout, I am able to cleanup the mounts | 10:26 |
tristan | (manually) | 10:26 |
tristan | with the correct configuration... it's maintaining the connection and mounting much more quickly | 10:27 |
tristan | however, it still leaves stale mounts behind on termination | 10:28 |
*** jonathanmaw has quit IRC | 10:31 | |
*** jonathanmaw has joined #buildstream | 10:31 | |
tristan | Ok, I have it... FINALLY ! | 10:36 |
tristan | Actually a sleep between two fusermount -u calls works | 10:36 |
tristan | which is, horrible | 10:36 |
tristan | but works. | 10:36 |
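(What that workaround amounts to, as a hedged sketch; this is a hypothetical helper, not the actual push/pull script:)

```python
import os
import subprocess
import time

def unmount_with_retry(mount_point, attempts=3, delay=1.0):
    # Hypothetical cleanup: the first fusermount -u can fail while sshfs is
    # still winding down, so retry after a short sleep rather than giving up
    # and leaving a stale mount behind.
    for _ in range(attempts):
        if not os.path.ismount(mount_point):
            return
        if subprocess.call(['fusermount', '-u', mount_point]) == 0:
            return
        time.sleep(delay)
    raise RuntimeError('could not unmount {}'.format(mount_point))
```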
* tristan notes the lack of a --pushers argument to bst matching --fetchers and --builders | 10:36 | |
juergbi | oh, i missed that? :/ | 10:38 |
tristan | whew... so eek | 12:22 |
tristan | I have been pushing base-system.bst artifact for 25min | 12:23 |
tristan | my system monitor says I'm uploading about 3/4k | 12:23 |
tristan | per second | 12:23 |
tlater | I just successfully added caching to the CI \o/ Builds take 5 minutes instead of 20 now. | 12:27 |
tristan | Nice | 12:27 |
tristan | crap, it's still not cleaning up | 12:37 |
tristan | terminating pull/push leaves many mounts lying around... what did I do | 12:37 |
tlater | Hmm... I'm getting fuse errors | 12:56 |
tristan | tlater, BUG ? | 12:57 |
tristan | tlater, with latest master ? | 12:57 |
tlater | No, definitely not latest master. It may be related to docker, trying a fresh image... | 12:58 |
tristan | I recently added a check to the fuse mounts to see if the child fuse process crashed | 12:58 |
tristan | without that, I get a strange condition (very rarely), that after an unmount, creating the directory (which already exists) fails | 12:59 |
tristan | and then checking that directory says transport endpoint not connected | 12:59 |
tlater | Might have to rebase then, and figure out why this happens. It would explain why my tests kept failing out of nowhere. | 12:59 |
tristan | I *think* this means the SafeHardlinks layer crashed, so I now added a check | 13:00 |
tristan | still I think we don't get a traceback for a crash in the fuse layer (which sucks, yeah) | 13:00 |
tristan | maybe we need pid based log files for fuse layers so that we catch tracebacks from that subprocess somehow | 13:00 |
tristan | that, or something complex like we do in the scheduler already, to send the traceback to the parent as a string; but somehow I think that wont work if we're going python -> libfuse -> back to python | 13:01 |
tlater | Should I make the tests fail on the first failing test or let them all complete? | 13:05 |
tristan | tlater, we want to fail if any of the tests fail, but we want to run them all | 13:08 |
tristan | so the latter | 13:08 |
tristan | better to know that 5 tests fail, than take the time to fix one and push a new branch to see 4 more failures (even when you know that fixing that one thing will cause everything else to pass) | 13:08 |
tlater | Alright, makes sense. | 13:09 |
tristan | Ok so, I've pushed some improvements to ostree push/pull process termination | 13:09 |
tristan | But I have yet to successfully push an artifact | 13:09 |
tristan | I think it might take all day to push one artifact | 13:09 |
tristan | I will try letting it run overnight | 13:10 |
tristan | see what happens | 13:10 |
gitlab-br-bot | push on buildstream@master (by Tristan Van Berkom): 8 commits (last: ostree-pull-ssh, ostree-push: Dont try to unmount not-mounted directory) https://gitlab.com/BuildStream/buildstream/commit/4f3dcabd9c3abd8c992ca0d824aec45f722c6e06 | 13:10 |
juergbi | it was also not extremely fast for me but it averaged at about 500 kbps (with large variations, though), i.e., still a lot faster than what you're seeing | 13:12 |
gitlab-br-bot | push on buildstream@master (by Tristan Van Berkom): 1 commit (last: main.py: Added --pushers argument) https://gitlab.com/BuildStream/buildstream/commit/17f8bf80403b02262a35362da80a0ec379425eda | 13:13 |
tristan | juergbi, yeah, the 350ms ping from here to the UK might cause it to be a bit slower | 13:14 |
tristan | still, I have good bandwidth :-/ | 13:15 |
tristan | so it should appear like a local operation to ostree... and only ssh does the networking... not sure why this is so slow | 13:16 |
juergbi | maybe there are too many roundtrips, not sure | 13:16 |
tristan | there probably are a lot, I think ostree does per-file operations | 13:16 |
tristan | and we're talking about like, I think 1.5GB of relatively small files | 13:16 |
tristan | Probably also separate operations for reading file attributes and the like | 13:17 |
tristan | maybe some threading inside ostree itself could speed this kind of thing up | 13:17 |
tristan | dunno | 13:17 |
tlater | The fresh docker image also has fuse issues. It seems to happen when running ldconfig after importing the base sdk. | 13:18 |
juergbi | will have to trace what filesystem operations are used | 13:18 |
tlater | Ugh, how should I debug this? And ssam2, do you happen to know if docker needs to do anything special to support fuse? | 13:25 |
gitlab-br-bot | buildstream: merge request (jonathan/dpkg-build->master: WIP: Jonathan/dpkg build) #37 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/37 | 13:29 |
ssam2 | tlater, not sure about that | 13:34 |
ssam2 | https://github.com/moby/moby/issues/514 | 13:35 |
ssam2 | maybe you need to enable CAP_SYS_ADMIN ? makes sense it would be a privileged op | 13:35 |
*** tristan has quit IRC | 13:35 | |
tlater | So do I have to recompile docker locally for this? | 13:36 |
tlater | Ah, no, looks like they added a --privileged flag. | 13:37 |
tlater | Docker is probably running with that on the CI servers, hence I never saw the fuse error there. | 13:37 |
ssam2 | yes, i've just checked on the machine and they are setting privileged=true in the config there | 13:39 |
tlater | Cool, thanks :) | 13:39 |
tlater | Alright, so what seems to be happening is that some directories are owned by a different user, and root doesn't have privileges to write to those. That results in autotools being unable to write, and failing to build. | 13:52 |
tlater | Since when can a user prevent root from writing to their files? | 13:52 |
*** tristan has joined #buildstream | 13:59 | |
*** jude has quit IRC | 14:04 | |
tlater | tristan: This is indeed a "BUG" problem :/ | 14:05 |
tristan | tlater, so what does it say ? | 14:05 |
tlater | 'buildstream._fuse.mount.FuseMountError: SafeHardlinks reported exit code 1 when unmounting' | 14:06 |
tristan | Ah, ok interesting | 14:06 |
tristan | tlater, are you saying you can reproduce that consistently ? | 14:06 |
tristan | with a specific test ? | 14:06 |
tlater | Yup | 14:06 |
tlater | I could push a docker image with this. | 14:06 |
tristan | tlater, and can you reproduce it on your machine ? or _only_ on gitlab ? | 14:06 |
tlater | Well, only on this docker image | 14:07 |
tristan | oh... so I have to learn what docker is :) | 14:07 |
tristan | hahahaha | 14:07 |
tristan | the dreaded day has come ! | 14:07 |
tlater | Heh | 14:07 |
tristan | tlater, it happens for both cmake and autotools tests right ? | 14:08 |
tlater | I haven't properly tested cmake yet | 14:08 |
tlater | But it's probably the same issue | 14:08 |
tlater | I'll try running the test for that too... | 14:08 |
tristan | tlater, anyway this is good news to me, because I needed to reproduce that thing, we just cant know for sure if it's the same case I've been seeing | 14:08 |
tristan | unfortunately | 14:09 |
tlater | If there's a docker repo somewhere I could push the image there | 14:09 |
tlater | For you to have a look | 14:09 |
tristan | tlater, I won't be fixing this tonight... if you want to try to fix it; the problem is most certainly in _fuse/hardlinks.py | 14:09 |
tristan | tlater, if you have other things you can do to tidy up these tests, do that instead, but I think you are blocking on this | 14:10 |
tlater | Ok, though I think that's probably a bit beyond what I can do quickly. | 14:10 |
tristan | Well, somebody probably has to do it quickly :D | 14:11 |
tristan | hehehe | 14:11 |
tristan | tlater, ok tell you what... first prepare the MR with those failing tests disabled (I think you already have that or very close to it)... | 14:11 |
tlater | Yeah, I'll add a disable command. That should probably take an hour or two anyway. | 14:12 |
tlater | I'll also make sure I can reproduce this, and that it's not some magical docker flag I need to set. | 14:12 |
tristan | If you have time, just try to find out what's causing it to crash; the first step would be to redirect the stdout/stderr of the fuse mount process to some file, so hopefully you will just see a traceback, and then palmface, and immediately see what is going wrong | 14:12 |
tristan | (you can send me the palmface by email and I will apply that to myself locally) | 14:13 |
tlater | ;P | 14:13 |
*** jude has joined #buildstream | 14:13 | |
tristan | _fuse/mount.py does the magic of mounting fuse layers... and it's done with multiprocessing... I'm not entirely sure how to redirect that subprocess's stdout/stderr to a log, though :) | 14:14 |
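(One way this could look, as a sketch under the assumption that the fuse layer is started via multiprocessing.Process with a plain function target; the names are hypothetical:)

```python
import os
import sys
from multiprocessing import Process

def fuse_child(mount_fn, mountpoint, logdir):
    # Hypothetical child entry point: re-open stdout/stderr onto a pid-named
    # log file so a traceback from the SafeHardlinks layer ends up somewhere
    # readable after the fact.
    logfile = os.path.join(logdir, 'fuse-{}.log'.format(os.getpid()))
    with open(logfile, 'w', buffering=1) as log:
        os.dup2(log.fileno(), sys.stdout.fileno())
        os.dup2(log.fileno(), sys.stderr.fileno())
        mount_fn(mountpoint)

# usage sketch:
# Process(target=fuse_child, args=(mount, '/path/to/mnt', '/tmp')).start()
```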
tlater | Hmm... Nevermind, I got a different log. Perhaps I still was in a container without the permission flag? | 14:16 |
tristan | So cannot reproduce ? | 14:17 |
tristan | tlater, I have a "hunch", look at _hardlinks.py:92 | 14:18 |
tlater | Nope. Not in a new container -.- | 14:18 |
tristan | shutil.copy2 | 14:18 |
tristan | There are some strange cases where the build sandbox and ostree were able to set some xattrs | 14:18 |
tristan | but copy2 gets PermissionError raised when it attempts to copy them | 14:18 |
tristan | It can be that that is happening | 14:18 |
tlater | The permission errors happen in autotools | 14:19 |
tristan | In an ultimate future, we will be virtualizing the xattrs (and other things) with another fuse layer | 14:19 |
tlater | That should be after this, right? | 14:19 |
tristan | I dont know what is before or after what | 14:19 |
tristan | Or what "The permission errors" are | 14:19 |
tristan | Only if you tell me very specific things, I know what you are talking about :D | 14:19 |
tlater | I'm trying to build a test "package" with autotools. | 14:20 |
tristan | "package" ? | 14:20 |
tristan | artifact, maybe ? | 14:20 |
tlater | amhello, just a little source that writes "Hello world" in C. | 14:20 |
tlater | artifact, in bst nomenclature, yeah. | 14:21 |
tristan | mkay | 14:21 |
tlater | autotools gets permission denied errors when it tries to write something | 14:21 |
tlater | Checking the shell that bst leaves me in, the directory it tries to write to is owned by UID 65543 | 14:22 |
tristan | I'm sure something more specific than "autotools" gets a permission denied error | 14:22 |
tlater | And for some reason root can't write to it. | 14:22 |
tlater | automake: error: cannot open > src/Makefile.in: Permission denied | 14:22 |
tlater | That's the most specific error I get :/ | 14:22 |
tristan | automake | 14:22 |
tristan | Ok, so when running automake | 14:23 |
tristan | it's generating files, in the source directory | 14:23 |
tristan | Which should not be a fuse mount actually, just a regular mount | 14:23 |
tristan | read-write, with read-only root | 14:23 |
tristan | tlater, note that root is not root | 14:23 |
tristan | tlater, in a sandbox, root is the active user | 14:24 |
tlater | Alright, that makes more sense | 14:24 |
tlater | So automake should never write to that directory? | 14:24 |
tristan | And the directory src/ is owned by something else ? | 14:24 |
tlater | Yeah | 14:24 |
tristan | and the parent directory of that ? | 14:24 |
tristan | is owned by what ? | 14:24 |
tlater | root | 14:24 |
tristan | Aha | 14:25 |
tristan | and this is a tarball ? | 14:25 |
tlater | Lemme check, I think so | 14:25 |
tristan | that you unpacked into there ? | 14:25 |
tlater | Do tarballs keep permissions? | 14:25 |
tristan | tlater, I think this is what is happening... | 14:25 |
tlater | Not manually | 14:25 |
tristan | yes they keep everything | 14:25 |
tlater | That makes a lot of sense | 14:25 |
tristan | tlater, you are running buildstream "as root" in the runner | 14:25 |
tristan | so when buildstream unpacks a tarball... | 14:25 |
tristan | the file ownerships are retained | 14:26 |
tristan | 65543 was perhaps the UID of the user who created that | 14:26 |
tristan | tar -tvvf tarball.tar | 14:26 |
tristan | should tell you what is the UID of files inside it | 14:26 |
tristan | tlater, a regular user untarring, cannot create files under arbitrary UIDs | 14:26 |
tlater | Alright, yeah, I understand. Damnit, so much time wasted -.- | 14:26 |
tristan | and tar just assumes it's ok to use the effective UID | 14:27 |
tristan | tlater, probably we need to change the tar source to force that behavior | 14:27 |
tristan | so this doesnt happen when you try things as root | 14:27 |
tlater | Ok, but how do I set permissions such that I can build this as an arbitrary user? | 14:28 |
tristan | the tar source uses python to unpack | 14:29 |
tristan | you will want geteuid (effective uid) | 14:29 |
tristan | tlater, and you will want a callback for the unpacking of tarfiles which lets you set attributes, like uid/gid to geteuid and getegid | 14:29 |
tristan | tlater, it's somewhere there in the spaghetti of TarFile documentation | 14:30 |
tlater | Ok, cool. Well, at least we know these tests are actually failing for a reason. | 14:30 |
tristan | ok... this one shouldnt be hard to fix :) | 14:31 |
tlater | I'll add the ignore command, update the MR and see if I can fix this. | 14:31 |
tristan | Here is a hint: https://github.com/gtristan/ybd/blob/aboriginal/ybd/virtfs.py | 14:32 |
tristan | But I think there is an easier API than looping through all the TarInfo objects and extracting each one manually | 14:32 |
tristan | there is a filter callback you can use when doing one of the high-level TarFile.extractall() APIs or such | 14:33 |
tristan | which would be cleaner | 14:33 |
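(A minimal sketch of the approach, rewriting ownership on the TarInfo members before extraction rather than relying on a filter argument; the helper name is made up:)

```python
import os
import tarfile

def extract_as_current_user(tar_path, dest):
    # Sketch of the fix discussed above: force every member's ownership to
    # the effective uid/gid, so extracting as root does not reproduce
    # arbitrary uids (e.g. 65543) stored in the tarball.
    euid, egid = os.geteuid(), os.getegid()
    with tarfile.open(tar_path) as tar:
        members = tar.getmembers()
        for member in members:
            member.uid, member.gid = euid, egid
            member.uname, member.gname = '', ''  # force numeric fallback on chown
        tar.extractall(path=dest, members=members)
```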
tlater | Ok, I'll certainly have a look through the documentation. | 14:33 |
tlater | Thanks :) | 14:34 |
tristan | Thank you ! | 14:38 |
tristan | tlater, let me know if you find a way to reproduce: 'buildstream._fuse.mount.FuseMountError: SafeHardlinks reported exit code 1 when unmounting' | 14:38 |
tristan | I'm hunting that one :) | 14:38 |
tlater | I will, though it's possible I just hosed my only reference to that. | 14:39 |
gitlab-br-bot | buildstream: merge request (jonathan/dpkg-build->master: WIP: Jonathan/dpkg build) #37 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/37 | 15:06 |
*** jude has quit IRC | 15:18 | |
*** jude has joined #buildstream | 15:28 | |
*** jonathanmaw has quit IRC | 16:54 | |
*** tristan has quit IRC | 16:54 | |
*** tiagogomes has quit IRC | 16:59 | |
*** jude has quit IRC | 16:59 | |
*** ssam2 has quit IRC | 16:59 | |
*** violeta has quit IRC | 16:59 | |
*** tlater has quit IRC | 16:59 | |
*** violeta has joined #buildstream | 17:10 | |
*** jude has joined #buildstream | 17:10 | |
*** ssam2 has joined #buildstream | 17:10 | |
*** tiagogomes has joined #buildstream | 17:13 | |
*** tristan has joined #buildstream | 17:27 | |
*** ssam2 has quit IRC | 18:42 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!