IRC logs for #buildstream for Wednesday, 2017-09-20

07:27 *** tristan has joined #buildstream
07:28 *** ChanServ sets mode: +o tristan
08:18 *** bochecha_ has joined #buildstream
08:19 *** bochecha_ has quit IRC
08:21 <gitlab-br-bot> push on buildstream@pytest-version (by Mathieu Bridon): 1 commit (last: Specify the minimum version of pytest required) https://gitlab.com/BuildStream/buildstream/commit/5633dedbcb2b0c27cfbe6edeac6ece015ece2a60
08:21 <tristan> hah
08:21 <gitlab-br-bot> push on buildstream@pytest-version (by Mathieu Bridon): 1 commit (last: Specify the minimum required version of pytest) https://gitlab.com/BuildStream/buildstream/commit/0a4f05f6a0486d6a58a83d50e9b607ddb747befe
08:21 <tristan> I was just doing that
08:22 <gitlab-br-bot> buildstream: merge request (pytest-version->master: Specify the minimum required version of pytest) #94 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/94
08:30 *** jonathanmaw has joined #buildstream
08:39 <gitlab-br-bot> buildstream: merge request (pytest-version->master: Specify the minimum required version of pytest) #94 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/94
08:39 <gitlab-br-bot> push on buildstream@master (by Tristan Van Berkom): 1 commit (last: Specify the minimum required version of pytest) https://gitlab.com/BuildStream/buildstream/commit/0a4f05f6a0486d6a58a83d50e9b607ddb747befe
08:39 <gitlab-br-bot> buildstream: Mathieu Bridon deleted branch pytest-version
08:43 *** tristan has quit IRC
08:53 *** ssam2 has joined #buildstream
09:11 *** paulsherwood has joined #buildstream
09:12 * paulsherwood finally fixes his irc
09:12 <paulsherwood> what have i missed? :)
09:18 *** tlater has joined #buildstream
09:20 <ssam2> :-( buildstream seems to have lost the ability to check that it can push to the artifact cache before pushing
09:21 <ssam2> i just noticed my push job has been running for 10 minutes and is just failing repeatedly
09:25 <ssam2> oh, maybe the issue is that `bst push` never did this
09:27 *** tristan has joined #buildstream
09:41 <tristan> tlater, ok so regarding artifact caches; before I start rambling; you might want to ask what specifically you wanted to talk about
09:41 <tristan> Cause I could go in another direction and fail to answer your question :)
09:41 <tlater> tristan: Well, when we originally discussed this branch, we left remote caches as a future discussion
09:42 <tlater> You brought up kbas as well as just hosting tarballs inside an ostree repo.
09:42 <tristan> Right, ok so that remains unimplemented in the fallback unix platform
09:42 <tlater> Yup
09:43 <tlater> Essentially, the question is, which way are we moving, and when should this be done.
09:43 <tristan> I never meant to host tarballs inside an ostree repo, but the most attractive conclusion to this whole thing afaics, will probably be to have non-ostree-supporting platforms *speaking* to ostree artifact caches, in tar language
09:44 <tristan> So yeah there are a few questions to sort out:
09:44 <tristan>   o Can the unix platform land without remote artifact cache support first ?
09:45 <tristan>   o Even on linux, our configuration surface for artifact caches sucks, it lets the user do stupid things like setting the push/pull urls of the same artifact cache to different locations
09:46 <tristan>   o Can we make a statement that the configuration of artifact caches, and artifact cache implementations themselves, are subject to change at least in the initial release cycles, until we declare that stable ?
09:47 <tristan> So the history of this was: We didnt want anything else - and we very much desired to *not* have a remote artifact cache implementation
09:47 <paulsherwood> ?
09:47 <tristan> In other words, ostree with absolutely no weirdness and hackery around it, should have been enough.
09:47 <tristan> We wouldnt have had to implement anything at all to run somewhere else, that someone would have to install
09:47 <paulsherwood> oh. maybe.
09:48 <tristan> But this is becoming false because of fallback unix platforms, and because it looks like we are a far cry from upstreaming the ostree-push work
09:48 <paulsherwood> assuming your ostree idea works... from a user perspective on that...
09:49 <paulsherwood> (without having ostree on my unix/mac/windows whatever)
09:49 <tristan> paulsherwood, not sure I follow that last... do you mean 'speaking tar to an ostree cache' ?
09:50 <paulsherwood> yup... let's imagine there is an ostree cache, and i'm using buildstream on a (population of) windows/mac/unix
09:50 <paulsherwood> - is there any ability to assess status of the ostree-cache-server (eg a web interface)
09:50 <paulsherwood> - any way to search for available artifacts?
09:50 <paulsherwood> - any way to download an individual artifact?
09:50 <paulsherwood> - any way to see how much space is available on the server machine?
09:51 <paulsherwood> - any way to prune/cull artifacts to free up space?
09:51 <tristan> To be honest, you came up with this web interface thing a few months ago, but I had never planned in any way to have users interact directly with a remote artifact cache, without going through the buildstream CLI to do so.
09:51 <tristan> Not that its a bad feature
09:51 <paulsherwood> tristan: it's not that 'i came up with it'... there are usecases where it's needed afaict...
09:51 <tristan> but I actually never even knew about it with ybd/kbas, or had any inclination to poke it
09:53 <tristan> But in any case, what I am thinking here is, we can have a more clean UX in general, and cleaner api surfaces, if we can only mandate that artifact shares be run on linux
09:53 <paulsherwood> also, an inherent/implied requirement is that when we change cache-key calculation, we continue to support old ones on this server to some extent
09:53 <tristan> Which I feel is not a huge problem
09:54 * tristan thought he had filed a bug about that
09:54 <paulsherwood> i'm ok with cache-server requiring linux. i'm ok with it requiring/being ostree... but it needs to be useable and the facilities i've highlighted above are required imo
09:55 <tristan> paulsherwood, there was discussion about that, not related to any "interfacing with artifact shares without using BuildStream to do so"... but
09:55 <tristan> We were thinking of hardening how artifacts are pulled and checked out, or at least have a method for checking out by artifact key
09:56 <tristan> In the sense that, only the keys listed in artifact metadata are followed
09:56 <tristan> (and no buildstream cache key calculation is used for a checkout)
09:56 <tristan> that would solve that, but is orthogonal to users interacting with artifact shares without using `bst foo` to do so
09:56 <tristan> (if it can be done one way, it can be done for both, anyway)
09:58 <tristan> paulsherwood, so to clarify; nice features for users to interact with a web server are cool, I like em :) ... that whole web server idea though is out of scope, resource wise
09:58 <paulsherwood> tristan: i'm just sharing actual usecases. these facilities got added to kbas in response to situations that have arisen
09:58 <paulsherwood> it doesn't need to be a web service
09:58 <paulsherwood> but as we've shown with kbas, it's not exactly hard to do it using a web service
10:00 <ssam2> does kbas have those features listed above? how do i search for an artifact from http://artifacts1.baserock.org:8000/ ?
10:00 <paulsherwood> http://artifacts1.baserock.org:8000/* - or http://artifacts1.baserock.org:8000/linux* or http://artifacts1.baserock.org:8000/*foo*
10:01 <ssam2> ah, ok
10:01 <paulsherwood> and the returned values are downloadable
10:01 <paulsherwood> (links)
10:01 <ssam2> i'd say that's roughly comparable with what we already have at https://ostree.baserock.org/cache/refs/heads/baserock/
10:01 <ssam2> which is just the raw files on disk
10:01 <ssam2> downloading won't work though
10:01 <tristan> Anyway building something nice on top of that is not difficult
10:01 <ssam2> but you can download using the `ostree` CLI
10:02 <paulsherwood> ssam2: we have a live kbas containing .5million artifacts... is there a way of filtering that list?
10:03 <ssam2> i'd use the ostree CLI for that
10:03 <paulsherwood> ack
10:03 <ssam2> `ostree refs | grep ...`
10:03 <paulsherwood> fair enough
10:04 <paulsherwood> what about culling?
10:04 <tristan> So there are few threads spinning now... but yes culling
10:05 <tristan> So, at the conference we had some discussion, and the idea of multiple artifact caches came up
10:05 <tristan> From Juan and Angelos I believe (if I'm spelling names right)
10:05 <tristan> The idea being that, one should be able to setup production and development environments
10:06 <paulsherwood> not sure that multiple caches is relevant. the basic use case is... i have a cache, it's running out of space. how do i keep it running
10:06 <tristan> So, one would branch a project
10:06 <tristan> paulsherwood, it is...
10:06 <tristan> One would branch a project
10:06 <tristan> And then create a separate artifact cache for those devs to push to
10:06 <tristan> But they would still fallback pull artifacts from mainline where they have not diverged
10:06 <tristan> The point being, development artifact caches can easily be nuked from existence
10:07 <paulsherwood> we're talking about different things. i have a cache. it's out of space. it needs to free up some space. preferably automatically, but at a pinch i need to run something
10:07 <tristan> And production artifacts contain every single thing which ever landed in production (stuff you dont want to delete).
10:07 <tristan> I.e., we're talking about the sane alternative to trying to cull something
10:07 <tristan> which doesnt really have a sensical culling strategy
10:08 <paulsherwood> i'm a user. i spun up this cache, i under-estimated how much space it needs. i'm on holiday. the cache runs out of space...
10:08 <tristan> For a local users artifact cache, perhaps a date based culling strategy is passable
10:08 <tristan> Because we assume that important things have anyway been pushed to a share
10:08 <paulsherwood> yes. let's assume that important things are kept. still... this cache is out of space.
10:09 <paulsherwood> (note, i think 'cache' has different properties from 'archive', 'backup' etc)
10:09 <tristan> Right, so this will be a good step forward, because there is *no way* to know which artifact is precious and which is not
10:09 <tristan> But the production/dev split, automatically provides this.
10:09 <paulsherwood> (in my world view a cache is a temporary thing, to improve performance)
10:10 <tristan> Ok I see what you mean, the share is not exactly like this though
10:10 <paulsherwood> i don't know what 'the share' means here
10:10 <tristan> However with the idea of multiple caches, you can kind of achieve both by saying, my devel share is a 'cache', while my production share is a 'store'
10:11 <paulsherwood> whatever :) i still have this situation, the 'cache' is out of space. what happens?
10:11 <tristan> Well, if its a devel share, you can just nuke it and nothing important is lost
10:11 <paulsherwood> i ask, because in actual experience, running out of space keeps being a common thing to kill services
10:12 <tristan> And rebuilding from scratch will also reuse the base parts of the system where your devel branch has not diverged
10:12 <paulsherwood> tristan: i'm on holiday, and/or it's 3AM my timezone... there are 500 engineers using the service, and it's suddenly out of space
10:12 <paulsherwood> are you saying i have to babysit the space on this thing? :-)
10:13 <tristan> We're certainly not there in terms of automating it no
10:13 <paulsherwood> can ostree actually be culled?
10:14 <tristan> So, Error, no space left on device; means your team of 500 engineers who suddenly lost an artifact share, basically cannot share new artifacts until someone kills the dev cache
10:14 <tristan> Which in the grand scheme of things, does not really prevent people from working
10:14 <tristan> But is a little inconvenient until monday yeah
10:14 <paulsherwood> tristan: if that's the outcome, ok. no possibility that running out of space kills the service?
10:15 <tristan> Yes ostree can be culled
10:15 <tristan> No
10:15 <tristan> No possibility of that really
10:15 <paulsherwood> you're sure? :)
10:15 <tristan> Well
10:15 <tristan> I really dont see how
10:16 <tristan> Note that a failed or interrupted artifact push will not leave garbage on the server
10:16 <tristan> So, if you want to push 1G and there is only 25MB, it will free up the 25MB after, but the pulls are anyway done over http
10:17 <paulsherwood> ack
10:17 <tristan> So yeah ostree repos can be modified and pruned and such, but pruning strategies of ostree assume a workflow we dont really have
10:17 <tristan> I.e. branches
10:18 <tristan> doesnt really make sense for strong cache key lookups, each ref is strong and separate, so in any case we would have to know what is precious and what is not
10:19 <tristan> All of this started I think with a practical intent to discuss what we have to do regarding artifact shares in order to land cross-platform branch right ?
10:19 * tristan feels like we drifted miles away
10:20 <paulsherwood> tristan: as i said, i believe the requirement is for a cache, not a store, so we don't really care about precious or not. precious can be handled in other ways
10:20 <tristan> In any case, what I ultimately *want*, is a single uniform way of configuring artifact shares, and running them on services
10:20 <paulsherwood> ack
10:21 <tristan> However if we want non-linux unix to have artifact shares *right now*, they will have to be different
10:22 <tristan> So there are few roads ahead, if we want tarball based artifact shares for non-linux right now; we run the risk that it will never change :)
10:22 <tristan> But, if we wanted, we could do it, and declare that artifact configurations may be subject to change...
10:23 <tristan> which means, at one point in the next year, we roll out something which requires people to reinstall artifact caches and reconfigure them, once
10:23 <tristan> So that they are all a uniform API surface, giving us some implementation freedom behind that API
10:23 <paulsherwood> i'm just trying to make sure that the artifact *cache* approach is actually workable. the main reason for cache is to reduce build/integration times
10:24 <paulsherwood> reinstalling/configuring occasionally is ok, so long as overall productivity is not badly impacted
10:24 <tristan> Right, I mean - I think that is not a very bad way to break API, so long as it really doesnt happen much
10:25 <tristan> It's not like breaking API on the actual YAML and stuff
10:25 <tristan> that stuff should be rock solid
10:25 <persia> I suspect some of the cache/store discussions are confusing things here.  The build/integration speedup happens when one wishes to build an artifact that is already built.  Ideally, this is pushed to one or more of the locations where BuildStream seeks to find prebuilt artifacts.  Where space runs out from one of those, builds will be slower until a new cache is made available on which people can store things.
10:25 <tristan> persia++
10:25 <persia> Whether the cache is "long term" (e.g. holds production artifacts), or "short term" (e.g. holds the last couple days of dev work), isn't that important.
10:26 <tristan> that would be the right now situation
10:26 <tristan> indeed, and I think the idea of being able to configure multiple lets you make that decision pretty nicely
10:26 <persia> tristan: That said, at some point it ought to be possible to set up a least-recently-used pruning mechanism for dev caches, etc.
10:27 <tristan> least-recently-used is far better than least-recently-created yes
10:27 <paulsherwood> is it only me who understands the term 'cache' expressly to be temporary, not long term?
10:27 <ssam2> i agree on that meaning of 'cache', yeah
10:27 <paulsherwood> kbas culls least recently used
10:27 <tristan> paulsherwood, I think it's only you who really minds about the terminology of it :)
10:28 <persia> paulsherwood: To explicitly address the issue of you wanting to work with a cache I manage whilst I'm on the other side of the planet, off comms, and sleeping, you should be able to set up a temporary remote artifact store quickly and easily on your local cloud to use until I'm around and can help restore sanity.
10:28 <tristan> But ok, your point is that keeping precious things is not a feature of artifact caches
10:28 <paulsherwood> tristan: quite
10:29 <persia> tristan: Perhaps "precious" isn't the best word here: I think you mean "artifacts one has an expectation of needing to use for speed, even if they haven't been used that recently".
10:29 <paulsherwood> persia: it's 2017. having any manual step/process here should not be necessary
10:29 <tristan> paulsherwood, I cant say I'm personally satisfied with that as a user honestly, but in *any* case allowing multiple gives you a measure of flexibility, which allows keeping what you hold dear.
10:29 <persia> tristan: In practice, continuing to run CI builds of supported branches is probably the easiest way to prevent that sort of thing from LRU-expiring.
10:29 <persia> paulsherwood: Fair enough
10:34 <paulsherwood> tristan: we could move on to a discussion about how to store precious things. but as a cache service user, i just want my build done as fast as possible, and i scream every time something gets rebuilt that shouldn't have been needed
10:34 <tristan> :)
10:35 * tristan stepped out for sec sorry...
10:35 <tristan> So culling is going to be important for GNOME as well
10:35 <tristan> And unfortunately it's not easy to do
10:36 <tristan> afaics, you either have to have a weird hack to observe/snoop the http traffic related to the repo, or you really need a service.
10:36 <tristan> because otherwise the server has no way of knowing what was downloaded last
10:38 <tristan> would have to record that on disk in some way, and permission for uploading, generally requires higher clearance than downloading
10:38 <tristan> but downloading would now have to modify the disk
10:38 <paulsherwood> tristan: fwiw, the kbas approach is: on serving artifact, touch it. on uploading artifact, if not enough-space, cull lru to enough-space
10:38 <tristan> yeah I was imagining that
10:40 <tristan> We would have to touch perhaps, symbolic refs in another tree - but still security is weakened when compared with a completely read-only approach for downloads
10:40 <tristan> anyway, with an ultimately unified artifact cache, a service is at least required I think
10:41 <paulsherwood> kbas also uses a directory for each artifact file (to achieve atomicity and avoid races on clashing uploads), so the touch happens on the directory, not the file itself
10:41 <paulsherwood> i think we've done this to death for today :)
10:41 * tristan hopes we dont have to get into salting/hashing user passwords and stuff :-/
10:43 <tristan> I guess that with a service, we could do auth-free downloads, and uploads over ssh and not have to handle auth ourselves
10:44 <paulsherwood> +1, except that there needs to be some auth for private caches
10:45 <tristan> right that could be an option when installing the artifact share on a server (and we would just do both over ssh I guess in that case)
10:48 <tristan> tlater, ok so; I think that *whatever* happens - we want to land cross platform branch without artifact share / kbas support
10:49 <tristan> tlater, just in the interest of breaking this down into chunks; the cross platform stuff is already an improvement without that feature in place
10:49 <tlater> tristan: makes sense
10:49 <tristan> tlater, probably the verdict is going to be that we have two different artifact sharing techniques first and then hopefully unify it some time in the following year
10:50 <tlater> Would an implementation that can handle tar caches be a blocker for 1.0?
10:50 <tristan> I dont think that's reasonable no
10:51 <tristan> I mean, remote shared caches of course
10:51 <tristan> although I dont think the 1.0 marker really changes anything here
10:52 <tristan> (lets put it this way; 1.0 is a statement about our first stable API - if the artifact sharing parts are not going to be stable API for now anyway; there is no reason to block)
10:55 <tristan> tlater, looking good btw, one test failing probably due to the sandbox not cleaning up devices it created last time around ?
10:55 <tlater> tristan: Perhaps, yeah, looking into it right now
10:56 <tlater> tristan: I'll have to ask you about the sandbox RO stuff in a minute too, just want to sort this out first
10:56 <tlater> Then I think we can land quite soon :)
10:57 <tristan> I have a little nitpick
10:57 <tristan> tlater, now we have _artifactcache/{abstract class, implementation1, implementation2}
10:57 <tristan> tlater, and we have _platform/{abstract class, implementation1, implementation2}
10:58 <tristan> tlater, it would probably be sensible to have the same for the sandbox
10:58 <tlater> Yup, probably a good idea
11:01 <tlater> Huh, oddly enough the test doesn't fail on my local machine
11:10 <gitlab-br-bot> push on buildstream@cross_platform (by Tristan Maat): 1 commit (last: _sandboxchroot.py: Remove devices present in a sysroot) https://gitlab.com/BuildStream/buildstream/commit/22b63bac82587c83e4c820b304273baaec1171f4
11:10 <gitlab-br-bot> buildstream: merge request (cross_platform->master: WIP: Cross platform) #81 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/81
11:12 <tlater> Alright, we'll see if that solves it in ~42 minutes :/
11:12 <tristan> that long ?
11:12 <tristan> sometimes its slower than others
11:12 <tlater> The tar cache is a lot slower, unfortunately
11:12 <tlater> Especially because it can't always hardlink
11:13 <tristan> ah
11:13 <tlater> On this discussion: https://gitlab.com/BuildStream/buildstream/merge_requests/81#note_40406413
11:13 <tlater> I think I entirely misunderstand it, because I think it was already implemented like that - albeit with a different default
11:15 <tlater> (Which I changed, as well as making it a bit more public, but the underlying implementation remains the same)
11:15 <tristan> tlater, yeah it looks like you did that right; I however dont like this "trip read-write" thing approach that was there
11:16 <tristan> where was that
11:16 <tlater> tristan: I removed the trip read-write and replaced it with, well, the opposite
11:16 <tristan> But where were you calling this again ?
11:16 * tristan searches the patch
11:17 <tlater> In buildelement and scriptelement
11:17 <tristan> gitlab slow
11:17 <tristan> yeah but where, in configure ?
11:17 <tlater> Yup
11:17 <tlater> configure-sandbox
11:17 * tristan thinks probably it's just the API name that doesnt sit well
11:17 <tristan> ah so you made it set an attribute
11:18 <tristan> umm, do we do that anywhere else ?
11:18 * tristan thinks at least a function call is in order
11:18 <tlater> In all element classes now
11:18 <tristan> unless it's consistent with something ?
11:18 <tlater> tristan: No, it should be a function call
11:18 <tristan> tlater, where do we set attributes on other objects directly to tell them how to behave ?
11:18 <tristan> Ah, ok agreed.
11:18 <tlater> Not sure what I was thinking
11:20 <tristan> looking at the patch, you still have _runs_integration()
11:20 <tlater> tristan: Yeah, but that only states that the current element has integration commands
11:20 <tlater> It's a shortcut around sifting through public data
11:21 <tristan> yeah I know
11:21 <tristan> So I'm thinking what the right public API would be for this; or if it can be just a list comprehension copy/pasted everywhere
11:22 <tlater> It sounds like a big list comprehension
11:22 * tlater is gone for a bit
11:22 <tristan> Maybe
11:46 *** tristan has quit IRC
11:46 *** tristan has joined #buildstream
11:46 *** ChanServ sets mode: +o tristan
11:46 *** tristan has left #buildstream
13:29 *** tristan has joined #buildstream
13:29 *** ChanServ sets mode: +o tristan
13:40 *** tristan has quit IRC
13:44 *** tristan has joined #buildstream
14:32 *** jonathanmaw has quit IRC
14:34 *** jonathanmaw has joined #buildstream
14:38 <jonathanmaw> I'm getting the error "https://pastebin.com/mDfW19EP" when I run `bst --target-arch=armv8l64 build bootstrap/stage2-sysroot.bst`
14:38 <jonathanmaw> has anyone seen that before? it looks like something's gone very wrong with fuse
14:41 <jonathanmaw> currently trying the ppc64b version, to see if it's unique to armv8l64
14:44 <ssam2> at a glance it looks like something is assuming filenames are valid UTF-8, and is encountering a filename which is not
14:45 <ssam2> so there are probably two issues, one that a codepath in buildstream is treating filenames as text rather than binary strings, and the other that something is producing weird non-utf8 filenames
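A tiny illustration of the text-vs-bytes distinction being described here (the byte string is invented for the example): POSIX filenames are arbitrary bytes, so a codepath that strictly decodes them as UTF-8 raises, while Python's own filename handling (`os.fsdecode`/`os.fsencode`, which use the surrogateescape error handler) round-trips them losslessly:

```python
import os

raw = b'/usr/bin/\xd3\xf7\x0c\x91'  # invented bytes; not valid UTF-8

# What a "filenames are text" codepath effectively does -- and where it chokes:
try:
    raw.decode('utf-8')
    strict_ok = True
except UnicodeDecodeError:
    strict_ok = False

# Treating the name the way the os module does survives, via surrogate escapes:
name = os.fsdecode(raw)             # invalid bytes become lone surrogates
assert os.fsencode(name) == raw     # lossless round-trip back to bytes
```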
14:46 <ssam2> full logs would help, that traceback is only partial
15:01 <jonathanmaw> ssam2: the full log seems to be 5000 lines long
15:05 <jonathanmaw> ssam2: here it is, anyway https://pastebin.com/E3HuQuuv
15:10 <ssam2> oh I have another idea what this might be
15:11 <ssam2> oh maybe not
15:12 <ssam2> fuse.py does already support aarch64
15:12 <ssam2> https://github.com/terencehonles/fusepy/blob/0eafeb557e0e70926ed9450008ef17057d302391/fuse.py#L212
15:12 <ssam2> double check that platform.machine() is returning 'aarch64' .. if not, fuse.py will be breaking
15:12 <ssam2> in fact it does look a lot like a fuse.py bug
15:13 <ssam2> as it occurs just as the integration commands are run
15:13 <ssam2> which i think is the first time the FUSE SafeHardlinks filesystem kicks in
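The failure mode ssam2 suggests checking for comes from fusepy selecting its ctypes struct layouts based on `platform.machine()`, so an unrecognised machine string breaks it. A hypothetical sketch of that kind of dispatch (the set of names and the helper are illustrative, not fusepy's actual table):

```python
import platform

# Illustrative machine strings with a known struct layout; fusepy's real
# table lives in fuse.py and differs in detail.
_KNOWN_MACHINES = {'x86_64', 'i686', 'aarch64', 'ppc', 'ppc64', 'mips'}

def machine_supported(machine=None):
    """Return True if the given (or detected) machine string is recognised."""
    if machine is None:
        machine = platform.machine()
    return machine in _KNOWN_MACHINES
```

The point of the check: a native aarch64 host reports 'aarch64' and works, whereas an unexpected string such as 'armv8l64' would fall through the dispatch.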
15:14 <jonathanmaw> ssam2: I'm currently running this on an x86 machine
15:14 <jonathanmaw> with --target-arch=armv8l64
15:14 <ssam2> oh, trying to cross-build
15:14 <ssam2> ok
15:14 <ssam2> i have no idea then
15:15 <ssam2> it still looks a lot like there's a non-utf8 filename inside the sandboxed filesystem which something is choking on
15:15 <ssam2> which like i said, is probably two bugs for the price of one
15:15 <jonathanmaw> also, I wedged in some print statements to look at what is causing the problem, and it's apparently getting paths that look like "/usr/bin/\xd3\xf7\x0c\x91@\x014\x9f\x12\xf1\x80\x02T!hw\xf8\xf8\x03\x14*\xe0\x03\x16\xaa\x94\x06\x91\xadk\x94\x81\xee|\xd3\xff\xff5\xa0\x03\x90\x18\x7f|\xd3 \x0c\x91\x18\x18\x8b\x0b@\xb9\xa0\n\xb9\x1f\x041\xa0T\xb6\xdfB\xa9\xb8\x1f@\xf9\x10\x14\x1f \x03\xd5\xa2\x80R\xa1\x03\x90\xa0\x03\xd0!\xe0\x1c\x91\x80\x01\x91w\x
15:15 <jonathanmaw> 1b\x94\xe3\x03\x16\xaa\xe2\x03\xaa\x01\x80R\x80R\xae\x95\x94\xb3\x02@\xf9\xb6\xdfB\xa9\xb8\x1f@\xf9\xe0\x03\x13\xaa\xafl\x94\xf4\x03\xaa\x05T\xb3\x02@\xf9fja8c\x02\x01\x8b\x05@\xf9\xe2\x03\x01*\xa5xfx\xc5\xfeo7\xdf\xbcq!\x01T\x7f9\x02\x044\xb3\x02@\xf9B\x04QcB\x8b\xb4\xe0\x03\x13\xaa\x83e\x94\xf3SA\xa9\xf5\x13@\xf9\xfd{\xcc\xa8\xc0\x03_\xd6\x80\x12\xa0"
15:15 <ssam2> that is a little bit suspicious
15:16 <ssam2> you could try looking through the cache to see which element contains those files
15:22 <gitlab-br-bot> push on buildstream@sam/embed-fusepy (by Sam Thursfield): 2 commits (last: Fork and embed fusepy) https://gitlab.com/BuildStream/buildstream/commit/ddcff898aac546a7b85e62eea0552f4166076f20
15:23 <gitlab-br-bot> buildstream: merge request (sam/embed-fusepy->master: Embed fusepy and add support for ppc64 platforms) #95 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/95
15:23 <gitlab-br-bot> buildstream: merge request (sam/embed-fusepy->master: Embed fusepy and add support for ppc64 platforms) #95 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/95
15:30 <jonathanmaw> looking at the builddir left over in the cache, and in all the elements that stage2-binutils depends on, I can't see any files with mangled names in /usr/bin/
15:30 <jonathanmaw> I'm going to cross my fingers that it's random disk corruption, and start a new build without the cache, to see if it reoccurs.
15:30 <ssam2> possible
15:36 <gitlab-br-bot> push on buildstream@cross_platform (by Tristan Maat): 1 commit (last: _sandboxchroot.py: Remove devices present in a sysroot) https://gitlab.com/BuildStream/buildstream/commit/21154374ffdd0f3455c4869f9306d06f2947090f
15:36 <gitlab-br-bot> buildstream: merge request (cross_platform->master: WIP: Cross platform) #81 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/81
15:44 *** bochecha has joined #buildstream
15:49 <ssam2> what's the best way to exclude a file from the pep8 checks ?
15:49 <ssam2> embedding fuse.py has broken the tests, because of course it doesn't follow the style
15:49 <ssam2> i could edit pep8.sh to remove the glob
15:49 <ssam2> remove fuse.py from the glob, I mean
15:49 <ssam2> or get a bit more fancy and add a separate ext/ dir for things we've embedded ...
16:02 <bochecha> ssam2: you can set pep8ignore for that file
16:03 <bochecha> ssam2: there's an example here: https://pypi.python.org/pypi/pytest-pep8 (see "Configuring PEP8 options per project and file")
16:03 <bochecha> ssam2: and in fact, that's already done in setup.cfg for a few files :)
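For reference, the pytest-pep8 mechanism bochecha mentions is a per-file `pep8ignore` entry in `setup.cfg`; something like the following would skip the embedded file entirely (the exact path shown is an assumption about where fuse.py lands in the tree):

```ini
[pytest]
pep8ignore =
    buildstream/fuse.py ALL
```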
16:04 <ssam2> aha, thanks
16:05 <jonathanmaw> :/ not caused by a dodgy cache
16:05 <ssam2> oh dear. i strongly suspect issues in the FUSE layer
16:05 <ssam2> especially if the dodgy filenames don't appear on disk anywhere
16:09 <jonathanmaw> :/
16:10 <jonathanmaw> I'd try running it on ARM natively, but it's running an old baserock system, so won't have buildstream's dependencies
16:11 <jonathanmaw> so it's time to find a debian or fedora system for aarch64
16:11 <jonathanmaw> aha, I can debootstrap
16:16 <gitlab-br-bot> push on buildstream@sam/embed-fusepy (by Sam Thursfield): 2 commits (last: Fork and embed fusepy) https://gitlab.com/BuildStream/buildstream/commit/9c83378250ecc6c62b5c32909640d64fd6888d3f
16:16 <gitlab-br-bot> buildstream: merge request (sam/embed-fusepy->master: Embed fusepy and add support for ppc64 platforms) #95 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/95
16:28 <gitlab-br-bot> push on buildstream@sam/push-check-connectivity (by Sam Thursfield): 1 commit (last: bst push: Check connectivity to cache before trying to push) https://gitlab.com/BuildStream/buildstream/commit/0cf6c6bcd8e69554872b05efb267f5530a6b86d4
16:31 <gitlab-br-bot> buildstream: merge request (sam/push-check-connectivity->master: bst push: Check connectivity to cache before trying to push) #96 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/96
16:53 *** jonathanmaw has quit IRC
17:08 *** ssam2 has quit IRC
17:12 *** tlater has quit IRC
19:46 <gitlab-br-bot> buildstream: merge request (sam/push-check-connectivity->master: bst push: Check connectivity to cache before trying to push) #96 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/96
19:46 <gitlab-br-bot> push on buildstream@master (by Jürg Billeter): 1 commit (last: bst push: Check connectivity to cache before trying to push) https://gitlab.com/BuildStream/buildstream/commit/0cf6c6bcd8e69554872b05efb267f5530a6b86d4
19:46 <gitlab-br-bot> buildstream: Jürg Billeter deleted branch sam/push-check-connectivity
20:51 *** tristan has quit IRC
20:59 *** bochecha has quit IRC
20:59 *** bochecha has joined #buildstream
22:15 *** tristan has joined #buildstream
22:19 *** tristan has quit IRC
23:28 *** bochecha has quit IRC

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!