*** tristan has joined #buildstream | 07:27 | |
*** ChanServ sets mode: +o tristan | 07:28 | |
*** bochecha_ has joined #buildstream | 08:18 | |
*** bochecha_ has quit IRC | 08:19 | |
gitlab-br-bot | push on buildstream@pytest-version (by Mathieu Bridon): 1 commit (last: Specify the minimum version of pytest required) https://gitlab.com/BuildStream/buildstream/commit/5633dedbcb2b0c27cfbe6edeac6ece015ece2a60 | 08:21 |
tristan | hah | 08:21 |
gitlab-br-bot | push on buildstream@pytest-version (by Mathieu Bridon): 1 commit (last: Specify the minimum required version of pytest) https://gitlab.com/BuildStream/buildstream/commit/0a4f05f6a0486d6a58a83d50e9b607ddb747befe | 08:21 |
tristan | I was just doing that | 08:21 |
gitlab-br-bot | buildstream: merge request (pytest-version->master: Specify the minimum required version of pytest) #94 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/94 | 08:22 |
*** jonathanmaw has joined #buildstream | 08:30 | |
gitlab-br-bot | buildstream: merge request (pytest-version->master: Specify the minimum required version of pytest) #94 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/94 | 08:39 |
gitlab-br-bot | push on buildstream@master (by Tristan Van Berkom): 1 commit (last: Specify the minimum required version of pytest) https://gitlab.com/BuildStream/buildstream/commit/0a4f05f6a0486d6a58a83d50e9b607ddb747befe | 08:39 |
gitlab-br-bot | buildstream: Mathieu Bridon deleted branch pytest-version | 08:39 |
*** tristan has quit IRC | 08:43 | |
*** ssam2 has joined #buildstream | 08:53 | |
*** paulsherwood has joined #buildstream | 09:11 | |
* paulsherwood finally fixes his irc | 09:12 | |
paulsherwood | what have i missed? :) | 09:12 |
*** tlater has joined #buildstream | 09:18 | |
ssam2 | :-( buildstream seems to have lost the ability to check that it can push to the artifact cache before pushing | 09:20 |
ssam2 | i just noticed my push job has been running for 10 minutes and is just failing repeatedly | 09:21 |
ssam2 | oh, maybe the issue is that `bst push` never did this | 09:25 |
*** tristan has joined #buildstream | 09:27 | |
tristan | tlater, ok so regarding artifact caches; before I start rambling; you might want to ask what specifically you wanted to talk about | 09:41 |
tristan | Cause I could go in another direction and fail to answer your question :) | 09:41 |
tlater | tristan: Well, when we originally discussed this branch, we left remote caches as a future discussion | 09:41 |
tlater | You brought up kbas as well as just hosting tarballs inside an ostree repo. | 09:42 |
tristan | Right, ok so that remains unimplemented in the fallback unix platform | 09:42 |
tlater | Yup | 09:42 |
tlater | Essentially, the question is, which way are we moving, and when should this be done. | 09:43 |
tristan | I never meant to host tarballs inside an ostree repo; the most attractive conclusion to this whole thing, afaics, will probably be to have non-ostree-supporting platforms *speaking* to ostree artifact caches, in tar language | 09:43 |
tristan | So yeah there are a few questions to sort out: | 09:44 |
tristan | o Can the unix platform land without remote artifact cache support first ? | 09:44 |
tristan | o Even on linux, our configuration surface for artifact caches sucks, it lets the user do stupid things like setting the push/pull urls of the same artifact cache to different locations | 09:45 |
tristan | o Can we make a statement that the configuration of artifact caches, and artifact cache implementations themselves, are subject to change at least in the initial release cycles, until we declare that stable ? | 09:46 |
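A sketch of the configuration footgun from the second point above: with separate pull and push URL keys, nothing stops a user from pointing them at two different caches. The key names here are illustrative, inferred from the discussion rather than quoted from BuildStream's actual schema.

```yaml
# hypothetical user configuration: these two URLs are meant to name
# the same artifact cache, but nothing enforces that
artifacts:
  pull-url: https://artifacts.example.com/cache
  push-url: ssh://artifacts@other-host.example.com/cache
```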
tristan | So the history of this was: We didnt want anything else - and we very much desired to *not* have a remote artifact cache implementation | 09:47 |
paulsherwood | ? | 09:47 |
tristan | In other words, ostree with absolutely no weirdness and hackery around it, should have been enough. | 09:47 |
tristan | We wouldnt have had to implement anything at all to run somewhere else, that someone would have to install | 09:47 |
paulsherwood | oh. maybe. | 09:47 |
tristan | But this is becoming false because of fallback unix platforms, and because it looks like we are a far cry from upstreaming the ostree-push work | 09:48 |
paulsherwood | assuming your ostree idea works... from a user perspective on that... | 09:48 |
paulsherwood | (without having ostree on my unix/mac/windows whatever) | 09:49 |
tristan | paulsherwood, not sure I follow that last... do you mean 'speaking tar to an ostree cache' ? | 09:49 |
paulsherwood | yup... let's imagine there is an ostree cache, and i'm using buildstream on a (population of) windows/mac/unix | 09:50 |
paulsherwood | - is there any ability to assess status of the ostree-cache-server (eg a web interface) | 09:50 |
paulsherwood | - any way to search for available artifacts? | 09:50 |
paulsherwood | - any way to download an individual artifact? | 09:50 |
paulsherwood | - any way to see how much space is available on the server machine? | 09:50 |
paulsherwood | - any way to prune/cull artifacts to free up space? | 09:51 |
tristan | To be honest, you came up with this web interface thing a few months ago, but I had never planned in any way to have users interact directly with a remote artifact cache, without going through the buildstream CLI to do so. | 09:51 |
tristan | Not that its a bad feature | 09:51 |
paulsherwood | tristan: it's not that 'i came up with it'... there are usecases where it's needed afaict... | 09:51 |
tristan | but I actually never even knew about it with ybd/kbas, or had any inclination to poke it | 09:51 |
tristan | But in any case, what I am thinking here is, we can have a more clean UX in general, and cleaner api surfaces, if we can only mandate that artifact shares be run on linux | 09:53 |
paulsherwood | also, an inherent/implied requirement is that when we change cache-key calculation, we continue to support old ones on this server to some extent | 09:53 |
tristan | Which I feel is not a huge problem | 09:53 |
* tristan thought he had filed a bug about that | 09:54 | |
paulsherwood | i'm ok with cache-server requiring linux. i'm ok with it requiring/being ostree... but it needs to be useable and the facilities i've highlighted above are required imo | 09:54 |
tristan | paulsherwood, there was discussion about that, not related to any "interfacing with artifact shares without using BuildStream to do so"... but | 09:55 |
tristan | We were thinking of hardening how artifacts are pulled and checked out, or at least have a method for checking out by artifact key | 09:55 |
tristan | In the sense that, only the keys listed in artifact metadata are followed | 09:56 |
tristan | (and no buildstream cache key calculation is used for a checkout) | 09:56 |
tristan | that would solve that, but is orthogonal to users interacting with artifact shares without using `bst foo` to do so | 09:56 |
tristan | (if it can be done one way, it can be done for both, anyway) | 09:56 |
tristan | paulsherwood, so to clarify; nice features for users to interact with a web server are cool, I like em :) ... that whole web server idea though is out of scope, resource wise | 09:58 |
paulsherwood | tristan: i'm just sharing actual usecases. these facilities got added to kbas in response to situations that have arisen | 09:58 |
paulsherwood | it doesn't need to be a web service | 09:58 |
paulsherwood | but as we've shown with kbas, it's not exactly hard to do it using a web service | 09:58 |
ssam2 | does kbas have those features listed above? how do i search for an artifact from http://artifacts1.baserock.org:8000/ ? | 10:00 |
paulsherwood | http://artifacts1.baserock.org:8000/* - or http://artifacts1.baserock.org:8000/linux* or http://artifacts1.baserock.org:8000/*foo* | 10:00 |
ssam2 | ah, ok | 10:01 |
paulsherwood | and the returned values are downloadable | 10:01 |
paulsherwood | (links) | 10:01 |
ssam2 | i'd say that's roughly comparable with what we already have at https://ostree.baserock.org/cache/refs/heads/baserock/ | 10:01 |
ssam2 | which is just the raw files on disk | 10:01 |
ssam2 | downloading won't work though | 10:01 |
tristan | Anyway building something nice on top of that is not difficult | 10:01 |
ssam2 | but you can download using the `ostree` CLI | 10:01 |
paulsherwood | ssam2: we have a live kbas containing 0.5 million artifacts... is there a way of filtering that list? | 10:02 |
ssam2 | i'd use the ostree CLI for that | 10:03 |
paulsherwood | ack | 10:03 |
ssam2 | `ostree refs | grep ...` | 10:03 |
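For larger repos, the same filtering can be scripted; a minimal sketch, assuming an illustrative repo path and pattern (`ostree refs --repo=...` is the real command, the path and glob are not):

```python
# list the refs in an ostree repo and filter them by glob
import fnmatch
import subprocess

refs = subprocess.run(
    ["ostree", "refs", "--repo=/srv/artifacts/repo"],  # path is illustrative
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for ref in fnmatch.filter(refs, "*linux*"):
    print(ref)
```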
paulsherwood | fair enough | 10:03 |
paulsherwood | what about culling? | 10:04 |
tristan | So there are few threads spinning now... but yes culling | 10:04 |
tristan | So, at the conference we had some discussion, and the idea of multiple artifact caches came up | 10:05 |
tristan | From Juan and Angelos I believe (if I'm spelling names right) | 10:05 |
tristan | The idea being that, one should be able to setup production and development environments | 10:05 |
paulsherwood | not sure that multiple caches is relevant. the basic use case is... i have a cache, it's running out of space. how do i keep it running | 10:06 |
tristan | So, one would branch a project | 10:06 |
tristan | paulsherwood, it is... | 10:06 |
tristan | One would branch a project | 10:06 |
tristan | And then create a separate artifact cache for those devs to push to | 10:06 |
tristan | But they would still fallback pull artifacts from mainline where they have not diverged | 10:06 |
tristan | The point being, development artifact caches can easily be nuked from existence | 10:06 |
paulsherwood | we're talking about different things. i have a cache. it's out of space. it needs to free up some space. preferably automatically, but at a pinch i need to run something | 10:07 |
tristan | And production artifacts contain every single thing which ever landed in production (stuff you dont want to delete). | 10:07 |
tristan | I.e., we're talking about the sane alternative to trying to cull something | 10:07 |
tristan | which doesnt really have a sensical culling strategy | 10:07 |
paulsherwood | i'm a user. i spun up this cache, i under-estimated how much space it needs. i'm on holiday. the cache runs out of space... | 10:08 |
tristan | For a local users artifact cache, perhaps a date based culling strategy is passable | 10:08 |
tristan | Because we assume that important things have anyway been pushed to a share | 10:08 |
paulsherwood | yes. let's assume that important things are kept. still... this cache is out of space. | 10:08 |
paulsherwood | (note, i think 'cache' has different properties from 'archive', 'backup' etc) | 10:09 |
tristan | Right, so this will be a good step forward, because there is *no way* to know which artifact is precious and which is not | 10:09 |
tristan | But the production/dev split, automatically provides this. | 10:09 |
paulsherwood | (in my world view a cache is a temporary thing, to improve performance) | 10:09 |
tristan | Ok I see what you mean, the share is not exactly like this though | 10:10 |
paulsherwood | i don't know what 'the share' means here | 10:10 |
tristan | However with the idea of multiple caches, you can kind of achieve both by saying, my devel share is a 'cache', while my production share is a 'store' | 10:10 |
paulsherwood | whatever :) i still have this situation, the 'cache' is out of space. what happens? | 10:11 |
tristan | Well, if its a devel share, you can just nuke it and nothing important is lost | 10:11 |
paulsherwood | i ask, because in actual experience, running out of space keeps being a common thing to kill services | 10:11 |
tristan | And rebuilding from scratch will also reuse the base parts of the system where your devel branch has not diverged | 10:12 |
paulsherwood | tristan: i'm on holiday, and/or it's 3AM my timezone... there are 500 engineers using the service, and it's suddenly out of space | 10:12 |
paulsherwood | are you saying i have to babysit the space on this thing? :-) | 10:12 |
tristan | We're certainly not there in terms of automating it no | 10:13 |
paulsherwood | can ostree actually be culled? | 10:13 |
tristan | So, "Error: no space left on device" means your team of 500 engineers who suddenly lost an artifact share basically cannot share new artifacts until someone kills the dev cache | 10:14 |
tristan | Which in the grand scheme of things, does not really prevent people from working | 10:14 |
tristan | But is a little inconvenient until monday yeah | 10:14 |
paulsherwood | tristan: if that's the outcome, ok. no possibility that running out of space kills the service? | 10:14 |
tristan | Yes ostree can be culled | 10:15 |
tristan | No | 10:15 |
tristan | No possibility of that really | 10:15 |
paulsherwood | you're sure? :) | 10:15 |
tristan | Well | 10:15 |
tristan | I really dont see how | 10:15 |
tristan | Note that a failed or interrupted artifact push will not leave garbage on the server | 10:16 |
tristan | So, if you want to push 1G and there is only 25MB, it will free up the 25MB after, but the pulls are anyway done over http | 10:16 |
paulsherwood | ack | 10:17 |
tristan | So yeah ostree repos can be modified and pruned and such, but pruning strategies of ostree assume a workflow we dont really have | 10:17 |
tristan | I.e. branches | 10:17 |
tristan | doesnt really make sense for strong cache key lookups, each ref is strong and separate, so in any case we would have to know what is precious and what is not | 10:18 |
tristan | All of this started I think with a practical intent to discuss what we have to do regarding artifact shares in order to land cross-platform branch right ? | 10:19 |
* tristan feels like we drifted miles away | 10:19 | |
paulsherwood | tristan: as i said, i believe the requirement is for a cache, not a store, so we don't really care about precious or not. precious can be handled in other ways | 10:20 |
tristan | In any case, what I ultimately *want*, is a single uniform way of configuring artifact shares, and running them on services | 10:20 |
paulsherwood | ack | 10:20 |
tristan | However if we want non-linux unix to have artifact shares *right now*, they will have to be different | 10:21 |
tristan | So there are a few roads ahead; if we want tarball based artifact shares for non-linux right now, we run the risk that it will never change :) | 10:22 |
tristan | But, if we wanted, we could do it, and declare that artifact configurations may be subject to change... | 10:22 |
tristan | which means, at one point in the next year, we roll out something which requires people to reinstall artifact caches and reconfigure them, once | 10:23 |
tristan | So that they are all a uniform API surface, giving us some implementation freedom behind that API | 10:23 |
paulsherwood | i'm just trying to make sure that the artifact *cache* approach is actually workable. the main reason for cache is to reduce build/integration times | 10:23 |
paulsherwood | reinstalling/configuring occasionally is ok, so long as overall productivity is not badly impacted | 10:24 |
tristan | Right, I mean - I think that is not a very bad way to break API, so long as it really doesnt happen much | 10:24 |
tristan | It's not like breaking API on the actual YAML and stuff | 10:25 |
tristan | that stuff should be rock solid | 10:25 |
persia | I suspect some of the cache/store discussions are confusing things here. The build/integration speedup happens when one wishes to build an artifact that is already built. Ideally, this is pushed to one or more of the locations where BuildStream seeks to find prebuilt artifacts. Where space runs out from one of those, builds will be slower until a new cache is made available on which people can store things. | 10:25 |
tristan | persia++ | 10:25 |
persia | Whether the cache is "long term" (e.g. holds production artifacts), or "short term" (e.g. holds the last couple days of dev work), isn't that important. | 10:25 |
tristan | that would be the right now situation | 10:26 |
tristan | indeed, and I think the idea of being able to configure multiple lets you make that decision pretty nicely | 10:26 |
persia | tristan: That said, at some point it ought to be possible to set up a least-recently-used pruning mechanism for dev caches, etc. | 10:26 |
tristan | least-recently-used is far better than least-recently-created yes | 10:27 |
paulsherwood | is it only me who understands the term 'cache' expressly to be temporary, not long term? | 10:27 |
ssam2 | i agree on that meaning of 'cache', yeah | 10:27 |
paulsherwood | kbas culls least recently used | 10:27 |
tristan | paulsherwood, I think it's only you who really minds about the terminology of it :) | 10:27 |
persia | paulsherwood: To explicitly address the issue of you wanting to work with a cache I manage whilst I'm on the other side of the planet, off comms, and sleeping, you should be able to set up a temporary remote artifact store quickly and easily on your local cloud to use until I'm around and can help restore sanity. | 10:28 |
tristan | But ok, your point is that keeping precious things is not a feature of artifact caches | 10:28 |
paulsherwood | tristan: quite | 10:28 |
persia | tristan: Perhaps "precious" isn't the best word here: I think you mean "artifacts one has an expectation of needing to use for speed, even if they haven't been used that recently". | 10:29 |
paulsherwood | persia: it's 2017. having any manual step/process here should not be necessary | 10:29 |
tristan | paulsherwood, I cant say I'm personally satisfied with that as a user honestly, but in *any* case allowing multiple gives you a measure of flexibility, which allows keeping what you hold dear. | 10:29 |
persia | tristan: In practice, continuing to run CI builds of supported branches is probably the easiest way to prevent that sort of thing from LRU-expiring. | 10:29 |
persia | paulsherwood: Fair enough | 10:29 |
paulsherwood | tristan: we could move on to a discussion about how to store precious things. but as a cache service user, i just want my build done as fast as possible, and i scream every time something gets rebuilt that shouldn't have been needed | 10:34 |
tristan | :) | 10:34 |
* tristan stepped out for sec sorry... | 10:35 | |
tristan | So culling is going to be important for GNOME as well | 10:35 |
tristan | And unfortunately it's not easy to do | 10:35 |
tristan | afaics, you either have to have a weird hack to observe/snoop the http traffic related to the repo, or you really need a service. | 10:36 |
tristan | because otherwise the server has no way of knowing what was downloaded last | 10:36 |
tristan | would have to record that on disk in some way, and permission for uploading, generally requires higher clearance than downloading | 10:38 |
tristan | but downloading would now have to modify the disk | 10:38 |
paulsherwood | tristan: fwiw, the kbas approach is: on serving artifact, touch it. on uploading artifact, if not enough-space, cull lru to enough-space | 10:38 |
tristan | yeah I was imagining that | 10:38 |
tristan | We would perhaps have to touch symbolic refs in another tree - but still, security is weakened when compared with a completely read-only approach for downloads | 10:40 |
tristan | anyway, with an ultimately unified artifact cache, a service is at least required I think | 10:40 |
paulsherwood | kbas also uses a directory for each artifact file (to achieve atomicity and avoid races on clashing uploads), so the touch happens on the directory, not the file itself | 10:41 |
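A minimal sketch of the kbas-style policy just described, with hypothetical paths: serving an artifact touches its directory, and an upload that would run out of space first evicts the least-recently-used directories.

```python
import os
import shutil

CACHE_DIR = "/srv/artifacts"  # assumption: one directory per artifact

def mark_served(artifact):
    # bump the mtime of the artifact's directory, not the file inside it
    os.utime(os.path.join(CACHE_DIR, artifact))

def cull_lru(bytes_needed):
    # evict least-recently-served artifacts until enough space is free
    by_age = sorted(
        (os.path.join(CACHE_DIR, name) for name in os.listdir(CACHE_DIR)),
        key=os.path.getmtime,
    )
    for path in by_age:
        if shutil.disk_usage(CACHE_DIR).free >= bytes_needed:
            break
        shutil.rmtree(path)
```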
paulsherwood | i think we've done this to death for today :) | 10:41 |
* tristan hopes we dont have to get into salting/hashing user passwords and stuff :-/ | 10:41 | |
tristan | I guess that with a service, we could do auth-free downloads, and uploads over ssh and not have to handle auth ourselves | 10:43 |
paulsherwood | +1, except that there needs to be some auth for private caches | 10:44 |
tristan | right that could be an option when installing the artifact share on a server (and we would just do both over ssh I guess in that case) | 10:45 |
tristan | tlater, ok so; I think that *whatever* happens - we want to land cross platform branch without artifact share / kbas support | 10:48 |
tristan | tlater, just in the interest of breaking this down into chunks; the cross platform stuff is already an improvement without that feature in place | 10:49 |
tlater | tristan: makes sense | 10:49 |
tristan | tlater, probably the verdict is going to be that we have two different artifact sharing techniques first and then hopefully unify it some time in the following year | 10:49 |
tlater | Would an implementation that can handle tar caches be a blocker for 1.0? | 10:50 |
tristan | I dont think that's reasonable no | 10:50 |
tristan | I mean, remote shared caches of course | 10:51 |
tristan | although I dont think the 1.0 marker really changes anything here | 10:51 |
tristan | (lets put it this way; 1.0 is a statement about our first stable API - if the artifact sharing parts are not going to be stable API for now anyway; there is no reason to block) | 10:52 |
tristan | tlater, looking good btw, one test failing probably due to the sandbox not cleaning up devices it created last time around ? | 10:55 |
tlater | tristan: Perhaps, yeah, looking into it right now | 10:55 |
tlater | tristan: I'll have to ask you about the sandbox RO stuff in a minute too, just want to sort this out first | 10:56 |
tlater | Then I think we can land quite soon :) | 10:56 |
tristan | I have a little nitpick | 10:57 |
tristan | tlater, now we have _artifactcache/{abstract class, implementation1, implementation2} | 10:57 |
tristan | tlater, and we have _platform/{abstract class, implementation1, implementation2} | 10:57 |
tristan | tlater, it would probably be sensible to have the same for the sandbox | 10:58 |
tlater | Yup, probably a good idea | 10:58 |
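For reference, the layout being proposed follows the usual abstract-base-plus-implementations pattern; a sketch with illustrative names, not BuildStream's actual API:

```python
import sys
from abc import ABC, abstractmethod

class Sandbox(ABC):
    @abstractmethod
    def run(self, command):
        """Run a command inside the sandbox."""

class SandboxBwrap(Sandbox):
    # Linux implementation, e.g. bubblewrap-based
    def run(self, command):
        ...

class SandboxChroot(Sandbox):
    # fallback Unix implementation, e.g. chroot-based
    def run(self, command):
        ...

def create_sandbox():
    # the platform picks the implementation, mirroring _platform/
    if sys.platform.startswith("linux"):
        return SandboxBwrap()
    return SandboxChroot()
```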
tlater | Huh, oddly enough the test doesn't fail on my local machine | 11:01 |
gitlab-br-bot | push on buildstream@cross_platform (by Tristan Maat): 1 commit (last: _sandboxchroot.py: Remove devices present in a sysroot) https://gitlab.com/BuildStream/buildstream/commit/22b63bac82587c83e4c820b304273baaec1171f4 | 11:10 |
gitlab-br-bot | buildstream: merge request (cross_platform->master: WIP: Cross platform) #81 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/81 | 11:10 |
tlater | Alright, we'll see if that solves it in ~42 minutes :/ | 11:12 |
tristan | that long ? | 11:12 |
tristan | sometimes its slower than others | 11:12 |
tlater | The tar cache is a lot slower, unfortunately | 11:12 |
tlater | Especially because it can't always hardlink | 11:12 |
tristan | ah | 11:13 |
tlater | On this discussion: https://gitlab.com/BuildStream/buildstream/merge_requests/81#note_40406413 | 11:13 |
tlater | I think I entirely misunderstand it, because I think it was already implemented like that - albeit with a different default | 11:13 |
tlater | (Which I changed, as well as making it a bit more public, but the underlying implementation remains the same) | 11:15 |
tristan | tlater, yeah it looks like you did that right; I however dont like this "trip read-write" approach that was there | 11:15 |
tristan | where was that | 11:16 |
tlater | tristan: I removed the trip read-write and replaced it with, well, the opposite | 11:16 |
tristan | But where were you calling this again ? | 11:16 |
* tristan searches the patch | 11:16 | |
tlater | In buildelement and scriptelement | 11:17 |
tristan | gitlab slow | 11:17 |
tristan | yeah but where, in configure ? | 11:17 |
tlater | Yup | 11:17 |
tlater | configure-sandbox | 11:17 |
* tristan thinks probably it's just the API name that doesnt sit well | 11:17 | |
tristan | ah so you made it set an attribute | 11:17 |
tristan | umm, do we do that anywhere else ? | 11:18 |
* tristan thinks at least a function call is in order | 11:18 | |
tlater | In all element classes now | 11:18 |
tristan | unless it's consistent with something ? | 11:18 |
tlater | tristan: No, it should be a function call | 11:18 |
tristan | tlater, where do we set attributes on other objects directly to tell them how to behave ? | 11:18 |
tristan | Ah, ok agreed. | 11:18 |
tlater | Not sure what I was thinking | 11:18 |
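The shape of the fix they agree on, sketched with hypothetical names: the element asks the sandbox to change behaviour through a method, rather than assigning an attribute on it directly.

```python
class Sandbox:
    def __init__(self):
        self._root_read_only = True

    def set_root_read_only(self, read_only):
        # an explicit call keeps the contract visible and lets the
        # sandbox validate or react to the change
        self._root_read_only = bool(read_only)

# in an element's configure_sandbox(), hypothetically:
#     sandbox.set_root_read_only(False)
```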
tristan | looking at the patch, you still have _runs_integration() | 11:20 |
tlater | tristan: Yeah, but that only states that the current element has integration commands | 11:20 |
tlater | It's a shortcut around sifting through public data | 11:20 |
tristan | yeah I know | 11:21 |
tristan | So I'm thinking what the right public API would be for this; or if it can be just a list comprehension copy/pasted everywhere | 11:21 |
tlater | It sounds like a big list comprehension | 11:22 |
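A hypothetical shape for that list comprehension, loosely following BuildStream's Element API (`dependencies()` and `get_public_data()` exist; the exact keys and fallback here are assumptions):

```python
def gather_integration_commands(element, scope):
    # collect the 'integration-commands' lists from the 'bst' public
    # data of each dependency in the given scope
    return [
        cmd
        for dep in element.dependencies(scope)
        for cmd in (dep.get_public_data("bst").get("integration-commands") or [])
    ]
```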
* tlater is gone for a bit | 11:22 | |
tristan | Maybe | 11:22 |
*** tristan has quit IRC | 11:46 | |
*** tristan has joined #buildstream | 11:46 | |
*** ChanServ sets mode: +o tristan | 11:46 | |
*** tristan has left #buildstream | 11:46 | |
*** tristan has joined #buildstream | 13:29 | |
*** ChanServ sets mode: +o tristan | 13:29 | |
*** tristan has quit IRC | 13:40 | |
*** tristan has joined #buildstream | 13:44 | |
*** jonathanmaw has quit IRC | 14:32 | |
*** jonathanmaw has joined #buildstream | 14:34 | |
jonathanmaw | I'm getting the error "https://pastebin.com/mDfW19EP" when I run `bst --target-arch=armv8l64 build bootstrap/stage2-sysroot.bst` | 14:38 |
jonathanmaw | has anyone seen that before? it looks like something's gone very wrong with fuse | 14:38 |
jonathanmaw | currently trying the ppc64b version, to see if it's unique to armv8l64 | 14:41 |
ssam2 | at a glance it looks like something is assuming filenames are valid UTF-8, and is encountering a filename which is not | 14:44 |
ssam2 | so there are probably two issues, one that a codepath in buildstream is treating filenames as text rather than binary strings, and the other that something is producing weird non-utf8 filenames | 14:45 |
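A minimal illustration of the first bug described above, using the kind of byte sequence jonathanmaw pastes later in the log: strict UTF-8 decoding chokes on arbitrary bytes, while Python's `os.fsdecode()`/`os.fsencode()` round-trip them losslessly via surrogateescape.

```python
import os

raw = b"/usr/bin/\xd3\xf7\x0c\x91"  # not valid UTF-8

try:
    raw.decode("utf-8")             # what a text-assuming codepath does
except UnicodeDecodeError as e:
    print("strict decode fails:", e)

name = os.fsdecode(raw)             # surrogateescape: lossless
assert os.fsencode(name) == raw
```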
ssam2 | full logs would help, that traceback is only partial | 14:46 |
jonathanmaw | ssam2: the full log seems to be 5000 lines long | 15:01 |
jonathanmaw | ssam2: here it is, anyway https://pastebin.com/E3HuQuuv | 15:05 |
ssam2 | oh I have another idea what this might be | 15:10 |
ssam2 | oh maybe not | 15:11 |
ssam2 | fuse.py does already support aarch64 | 15:12 |
ssam2 | https://github.com/terencehonles/fusepy/blob/0eafeb557e0e70926ed9450008ef17057d302391/fuse.py#L212 | 15:12 |
ssam2 | double check that platform.machine() is returning 'aarch64' .. if not, fuse.py will be breaking | 15:12 |
ssam2 | in fact it does look a lot like a fuse.py bug | 15:12 |
ssam2 | as it occurs just as the integration commands are run | 15:13 |
ssam2 | which i think is the first time the FUSE SafeHardlinks filesystem kicks in | 15:13 |
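The suggested check is a one-liner; on a 64-bit ARM kernel it should print `aarch64`:

```python
import platform
print(platform.machine())  # fuse.py keys its ctypes setup off this value
```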
jonathanmaw | ssam2: I'm currently running this on an x86 machine | 15:14 |
jonathanmaw | with --target-arch=armv8l64 | 15:14 |
ssam2 | oh, trying to cross-build | 15:14 |
ssam2 | ok | 15:14 |
ssam2 | i have no idea then | 15:14 |
ssam2 | it still looks a lot like there's a non-utf8 filename inside the sandboxed filesystem which something is choking on | 15:15 |
ssam2 | which like i said, is probably two bugs for the price of one | 15:15 |
jonathanmaw | also, I wedged in some print statements to look at what is causing the problem, and it's apparently getting paths that look like "/usr/bin/\xd3\xf7\x0c\x91@\x014\x9f\x12\xf1\x80\x02T!hw\xf8\xf8\x03\x14*\xe0\x03\x16\xaa\x94\x06\x91\xadk\x94\x81\xee|\xd3\xff\xff5\xa0\x03\x90\x18\x7f|\xd3 \x0c\x91\x18\x18\x8b\x0b@\xb9\xa0\n\xb9\x1f\x041\xa0T\xb6\xdfB\xa9\xb8\x1f@\xf9\x10\x14\x1f \x03\xd5\xa2\x80R\xa1\x03\x90\xa0\x03\xd0!\xe0\x1c\x91\x80\x01\x91w\x | 15:15 |
jonathanmaw | 1b\x94\xe3\x03\x16\xaa\xe2\x03\xaa\x01\x80R\x80R\xae\x95\x94\xb3\x02@\xf9\xb6\xdfB\xa9\xb8\x1f@\xf9\xe0\x03\x13\xaa\xafl\x94\xf4\x03\xaa\x05T\xb3\x02@\xf9fja8c\x02\x01\x8b\x05@\xf9\xe2\x03\x01*\xa5xfx\xc5\xfeo7\xdf\xbcq!\x01T\x7f9\x02\x044\xb3\x02@\xf9B\x04QcB\x8b\xb4\xe0\x03\x13\xaa\x83e\x94\xf3SA\xa9\xf5\x13@\xf9\xfd{\xcc\xa8\xc0\x03_\xd6\x80\x12\xa0" | 15:15 |
ssam2 | that is a little bit suspicious | 15:15 |
ssam2 | you could try looking through the cache to see which element contains those files | 15:16 |
gitlab-br-bot | push on buildstream@sam/embed-fusepy (by Sam Thursfield): 2 commits (last: Fork and embed fusepy) https://gitlab.com/BuildStream/buildstream/commit/ddcff898aac546a7b85e62eea0552f4166076f20 | 15:22 |
gitlab-br-bot | buildstream: merge request (sam/embed-fusepy->master: Embed fusepy and add support for ppc64 platforms) #95 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/95 | 15:23 |
jonathanmaw | looking at the builddir left over in the cache, and in all the elements that stage2-binutils depends on, I can't see any files with mangled names in /usr/bin/ | 15:30 |
jonathanmaw | I'm going to cross my fingers that it's random disk corruption, and start a new build without the cache, to see if it reoccurs. | 15:30 |
ssam2 | possible | 15:30 |
gitlab-br-bot | push on buildstream@cross_platform (by Tristan Maat): 1 commit (last: _sandboxchroot.py: Remove devices present in a sysroot) https://gitlab.com/BuildStream/buildstream/commit/21154374ffdd0f3455c4869f9306d06f2947090f | 15:36 |
gitlab-br-bot | buildstream: merge request (cross_platform->master: WIP: Cross platform) #81 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/81 | 15:36 |
*** bochecha has joined #buildstream | 15:44 | |
ssam2 | what's the best way to exclude a file from the pep8 checks ? | 15:49 |
ssam2 | embedding fuse.py has broken the tests, because of course it doesn't follow the style | 15:49 |
ssam2 | i could edit pep8.sh to remove the glob | 15:49 |
ssam2 | remove fuse.py from the glob, I mean | 15:49 |
ssam2 | or get a bit more fancy and add a separate ext/ dir for things we've embedded ... | 15:49 |
bochecha | ssam2: you can set pep8ignore for that file | 16:02 |
bochecha | ssam2: there's an example here: https://pypi.python.org/pypi/pytest-pep8 (see "Configuring PEP8 options per project and file") | 16:03 |
bochecha | ssam2: and in fact, that's already done in setup.cfg for a few files :) | 16:03 |
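For reference, the `pep8ignore` approach bochecha describes looks roughly like this in setup.cfg; the path to the embedded fuse.py is illustrative:

```ini
[pytest]
pep8ignore =
    buildstream/_fuse/fuse.py ALL
```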
ssam2 | aha, thanks | 16:04 |
jonathanmaw | :/ not caused by a dodgy cache | 16:05 |
ssam2 | oh dear. i strongly suspect issues in the FUSE layer | 16:05 |
ssam2 | especially if the dodgy filenames don't appear on disk anywhere | 16:05 |
jonathanmaw | :/ | 16:09 |
jonathanmaw | I'd try running it on ARM natively, but it's running an old baserock system, so won't have buildstream's dependencies | 16:10 |
jonathanmaw | so it's time to find a debian or fedora system for aarch64 | 16:11 |
jonathanmaw | aha, I can debootstrap | 16:11 |
gitlab-br-bot | push on buildstream@sam/embed-fusepy (by Sam Thursfield): 2 commits (last: Fork and embed fusepy) https://gitlab.com/BuildStream/buildstream/commit/9c83378250ecc6c62b5c32909640d64fd6888d3f | 16:16 |
gitlab-br-bot | buildstream: merge request (sam/embed-fusepy->master: Embed fusepy and add support for ppc64 platforms) #95 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/95 | 16:16 |
gitlab-br-bot | push on buildstream@sam/push-check-connectivity (by Sam Thursfield): 1 commit (last: bst push: Check connectivity to cache before trying to push) https://gitlab.com/BuildStream/buildstream/commit/0cf6c6bcd8e69554872b05efb267f5530a6b86d4 | 16:28 |
gitlab-br-bot | buildstream: merge request (sam/push-check-connectivity->master: bst push: Check connectivity to cache before trying to push) #96 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/96 | 16:31 |
*** jonathanmaw has quit IRC | 16:53 | |
*** ssam2 has quit IRC | 17:08 | |
*** tlater has quit IRC | 17:12 | |
gitlab-br-bot | buildstream: merge request (sam/push-check-connectivity->master: bst push: Check connectivity to cache before trying to push) #96 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/96 | 19:46 |
gitlab-br-bot | push on buildstream@master (by Jürg Billeter): 1 commit (last: bst push: Check connectivity to cache before trying to push) https://gitlab.com/BuildStream/buildstream/commit/0cf6c6bcd8e69554872b05efb267f5530a6b86d4 | 19:46 |
gitlab-br-bot | buildstream: Jürg Billeter deleted branch sam/push-check-connectivity | 19:46 |
*** tristan has quit IRC | 20:51 | |
*** bochecha has quit IRC | 20:59 | |
*** bochecha has joined #buildstream | 20:59 | |
*** tristan has joined #buildstream | 22:15 | |
*** tristan has quit IRC | 22:19 | |
*** bochecha has quit IRC | 23:28 |