IRC logs for #buildstream for Monday, 2018-06-25

*** tristan_ has joined #buildstream00:09
*** tristan has quit IRC00:09
*** tristan_ has quit IRC02:06
*** tristan_ has joined #buildstream03:12
*** slaf has quit IRC05:12
*** slaf has joined #buildstream05:14
*** slaf has joined #buildstream05:14
*** slaf has joined #buildstream05:15
*** slaf has joined #buildstream05:15
*** slaf has joined #buildstream05:15
*** slaf has joined #buildstream05:15
*** slaf has joined #buildstream05:16
*** slaf has joined #buildstream05:16
*** Prince781 has quit IRC05:32
*** Prince781 has joined #buildstream05:42
*** bochecha_ has joined #buildstream06:11
gitlab-br-botbuildstream: merge request (tristan/docs-integration-commands->master: docs: Adding section on integration commands) #517 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/51706:34
gitlab-br-botbuildstream: merge request (tristan/docs-integration-commands->master: docs: Adding section on integration commands) #517 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/51706:38
*** bochecha_ has quit IRC06:56
*** bochecha_ has joined #buildstream06:56
*** Prince781 has quit IRC06:57
*** bochecha_ has quit IRC07:01
gitlab-br-botbuildstream: merge request (tristan/docs-integration-commands->master: docs: Adding section on integration commands) #517 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/51707:05
*** bochecha_ has joined #buildstream07:15
gitlab-br-botbuildstream: merge request (tristan/docs-integration-commands->master: docs: Adding section on integration commands) #517 changed state ("merged"): https://gitlab.com/BuildStream/buildstream/merge_requests/51707:37
*** coldtom has joined #buildstream07:54
*** Phil has joined #buildstream07:57
*** tristan_ has quit IRC08:36
*** jonathanmaw has joined #buildstream08:38
*** tiagogomes has joined #buildstream08:53
*** bethw has joined #buildstream08:53
*** dominic has joined #buildstream08:56
*** tiagogomes has quit IRC08:57
*** tiago has joined #buildstream08:59
noisecellgood morning09:21
noisecellwhen doing a "bst checkout" it seems that the artifact elements are copied to the selected directory; have we thought about using hard links instead of a copy, to avoid using a lot of the developer's disk space? e.g.: https://paste.baserock.org/zeguleduge09:23
tlaternoisecell: `bst checkout` has a `--hard-links` option (or suchlike), which you can use instead09:32
tlaterKeep in mind that this is linked directly to the artifact, so use it with care09:33
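For reference, the usage tlater describes would look something like this (a sketch; the exact flag spelling in bst may be `--hardlinks` rather than `--hard-links`, and the element name and destination directory are just examples):

    $ bst checkout --hardlinks systems/minimal-system.bst ~/checkout

Because the checked-out files are hard links straight into the artifact cache, editing them in place would corrupt the cached artifact, hence the warning to use it with care.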
noisecelltlater, oh, my bad. Thank you for pointing me to this :)09:35
*** ernestask has joined #buildstream09:56
*** aday has joined #buildstream10:14
laurenceok, we have a mailing list for buildgrid10:16
laurencejmac, finn, making us the 3 admins for now10:16
laurenceif that's ok10:16
finnthanks10:26
laurencehave you received the admin mail?10:26
jmacNothing yet10:27
finnnot yet10:27
laurenceforwarded10:28
tlaterCan we get a link to this mailing list? Perhaps you should email the buildstream ML about this, considering we had an email a few weeks ago stating that it would also be used for buildgrid.10:31
laurencetlater, that's the plan10:37
laurencetlater, which mail do you refer to, ooi?10:37
laurencei don't recall that specifically, but could be my memory10:37
tlaterHm, I can't find it right now, perhaps my memory is off...10:39
*** cs_shadow has joined #buildstream10:46
tlaterIs there a way to "block" an issue on gitlab?10:56
tlaterI have #438 which can probably not be worked on much before another issue is fixed10:57
*** aday has quit IRC10:57
*** aday has joined #buildstream11:10
laurencetlater, I think a comment in the description is best, with a link.11:22
laurencethink that's all you can do11:22
tlaterAww11:22
noisecellis there any way to define "Strict build plan" inside of project.conf? or is this only a user option?11:28
noisecellhttps://buildstream.gitlab.io/buildstream/using_config.html#user-config <-- for reference11:29
noisecellPhil, tlater ^^ ?11:29
tlaterSorry, not certain.11:32
*** laurence has left #buildstream11:33
Philnoisecell, I don't know, sorry.11:37
noisecelltlater, Phil, no worries.11:40
noisecellcan I assume that --no-strict will work for any command? or will it only work for one?11:44
noisecellit is not really clear in that "Note"11:45
noisecelland I've checked that it doesn't work for `bst build`11:45
noisecell$ bst build --no-strict systems/minimal-system.bst11:46
noisecellError: no such option: --no-strict11:46
tiagoOn the buildstream examples, it would be nice to display the output of `tree` under the project structure section, to aid understanding of the layout11:50
tlaternoisecell: You need `bst --no-strict build systems/minimal-system.bst`11:51
tlaterIt's a flag on `bst` rather than `bst build`11:51
tlaterWhich also means it works for every command :)11:51
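So, putting the flag in the right place (reusing the element from noisecell's failing example above):

    $ bst --no-strict build systems/minimal-system.bst
    $ bst --no-strict show systems/minimal-system.bst

Because it is a global flag on `bst` itself, it combines with any subcommand the same way.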
gitlab-br-botbuildstream: merge request (relative_workspaces->master: WIP: Patch for issue #191 support relative workspaces) #504 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/50411:54
noisecelltlater, I see, thanks12:40
*** bochecha_ has quit IRC12:46
*** ernestask has quit IRC12:49
gitlab-br-botbuildstream: merge request (relative_workspaces->master: WIP: Patch for issue #191 support relative workspaces) #504 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/50412:59
gitlab-br-botbuildstream: issue #444 ("Reopening a closed workspace is not intuative") changed state ("opened") https://gitlab.com/BuildStream/buildstream/issues/44413:35
*** Prince781 has joined #buildstream14:43
gitlab-br-botbuildstream: merge request (coldtom/275->master: WIP: Indicate where artifacts are going to and coming from in the log) #518 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/51814:52
*** laurence has joined #buildstream15:07
*** mohan43u has quit IRC15:13
*** mohan43u has joined #buildstream15:20
*** mohan43u has quit IRC15:20
*** tristan_ has joined #buildstream15:23
*** mohan43u has joined #buildstream15:24
*** bochecha_ has joined #buildstream15:24
*** mohan43u has quit IRC15:25
*** mohan43u has joined #buildstream15:27
*** mohan43u has joined #buildstream15:29
* tlater spots a tristan_15:39
tlaterI'd like to talk through the artifact expiry branch, if you have time15:39
*** slaf has quit IRC15:50
*** slaf has joined #buildstream15:50
*** slaf has joined #buildstream15:50
*** slaf has joined #buildstream15:51
*** dominic has quit IRC16:02
*** Prince781 has quit IRC16:10
*** Prince781 has joined #buildstream16:12
*** dominic has joined #buildstream16:14
*** dominic has quit IRC16:14
*** Prince781 has quit IRC16:17
*** tristan_ is now known as tristan16:25
* tristan peels a spot off of his face16:25
tristantlater, certainly, me too :)16:26
tristantlater, was in meeting before sorry16:26
tristanstill have some time ?16:26
tlaterYep :)16:27
tristantlater, ok lets try to go over as much as possible, starting with the api for job behaviors thing16:27
* tristan prepares a browser tab16:27
* tristan powers his laptop16:28
tlaterTsk, should always be on ;p16:28
tristanattached cord16:28
tristansucks when it decides to just go off on its own16:28
tristan!34716:29
* tlater hopes to never see that number again come Wednesday16:29
tristanbig page to load16:30
*** jcampbell is now known as j1mc_polari16:31
tristantlater, alright I think I'm refreshed (for as far as I've understood so far)16:33
tristantlater, do you understand what I mean in https://gitlab.com/BuildStream/buildstream/merge_requests/347#note_83809143 ?16:33
tlaterTo save some loading; is that the part just under JOB_TYPES?16:34
tlaterAh, yes16:34
tlaterOk, so I understand that to a degree16:34
tristanAlright, so basically, first forget QueueType and its members, it's a misnomer16:34
tlaterYep, I'd even like to rename those now.16:35
tristanor the names are misleading; they are intended to depict a specific /behavior/, not inform the core of a specific queue and /its behavior/16:35
tlaterNow, Job types are still different from Queue types16:36
tristanthe way it currently works (worded this way because I expect your branch changes things significantly), is that the scheduler is layered with the concrete queue types (and the Scheduler itself) at the highest level16:36
tristanI guess what I'm trying to explain is that the reasoning behind this is to keep the business logic on top, and keep the lower/core levels mechanical16:37
tristandont let knowledge of the business logic creep in16:38
tristanto the abstract queue.py class or other components which are focused on providing the features for the business logic layer16:38
* tlater struggles to define something as "business logic" 16:39
laurencePhil, thanks a lot for the status update on the ticket !! - https://gitlab.com/BuildStream/buildstream/issues/437#note_8401303816:39
tristantlater, I see what you mean... this is all very mid-layer stuff16:39
tristantlater, in literal terms, I'd do backwards what you have done16:40
*** Phil has quit IRC16:41
tlaterI.e., instead of having the scheduler figure out when to start running jobs, you'd rather have the jobs figure out when they are allowed to run?16:41
tristanI.e. right now, iiuc, "The code which declares the job gives the scheduling logic its $type, and the scheduling logic assigns a behavior to that"16:41
tristanInstead I would have "The code which declares the job informs the scheduling logic of the desired behavior"16:41
tristandirectly16:42
tlaterAlright, I understand that on a high level16:42
tlaterMy problem is trying to see how this api would work16:42
tristanThis basically keeps the business logic parts in the right places... in this case the example is what features a Queue concrete class implements... the decisions a Queue class makes should all go in its file16:43
tlaterHm, I suppose each job could carry information that tells the scheduler what kinds of jobs it's allowed to run with?16:44
tristanSo that's the part I need a refresher on16:44
tristanHow is this part defined, let's see16:44
tlaterCurrently, the scheduler picks off jobs from the queues whenever they become ready, but only starts running them when it has asserted the various conditions are met16:46
tlaterThe conditions here aren't the usual "has my dependency been built", but "will I completely screw over another currently running job if I run"16:47
tlaterWhich in my mind certainly is scheduler logic, and shouldn't go down to the queues/jobs16:47
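A compact way to see the inversion tristan is proposing (a hypothetical sketch, not the actual branch code): instead of the scheduler owning a type-to-behavior table, the code that creates the job declares the behavior directly:

    # Before (sketch): the scheduler maps a job's type to a behavior.
    JOB_RULES = {
        'build': {'exclusive': False},
        'clean': {'exclusive': True},
    }
    behavior = JOB_RULES[job.job_type]

    # After (sketch): the queue creating the job states what it needs,
    # and the scheduler stays mechanical, with no business-logic table.
    job = Job(action=build_element, exclusive=False)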
gitlab-br-botbuildstream: merge request (jmac/cas_virtual_directory->jmac/googlecas_and_virtual_directories_2: WIP: CAS-backed virtual directory implementation) #481 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/48116:52
tristantlater, sorry16:53
tristanI was going to say something here and said it in comments also...16:53
tlaterAh16:53
tristantlater, looks like you have some cleanup of what is public/private to do also, and some missing API docs comments16:54
tlaterI agree, I was half expecting a comment on that anyway16:54
tristanRight... so...16:55
tristanWhat is the ideal logic16:55
tristanI also checked out a local branch16:55
*** coldtom has quit IRC16:55
tristantlater, how does the scheduler pick off jobs from the queues ?16:55
tristanschedule_queue_jobs()16:56
tristanok, so scheduler is the one asking the jobs individually16:56
tristantlater, how come there is still a Queue.active_jobs member ?16:56
tristanis it required elsewhere ?16:57
tlaterThat's still there for frontend purposes16:57
tlaterThe frontend wants to know which jobs belong to which queues, unfortunately.16:58
tlaterHm, now that I think about that again, perhaps the frontend could ask the jobs what queue they belong to instead16:58
tlaterIt still wants to know total number of jobs running in each queue, though, so that won't work16:59
tlaterGah, well, in either case, the frontend is a bit entangled in this logic still16:59
tlaterAnd it's quite hard to break out completely. I'd rather leave that for a separate MR.17:00
tristanerrr, in any case what is here is futzed17:00
tristantlater, or, where do you keep the Queue.active_jobs lists up to date ?17:01
tristantlater, ok I'll comment on the issue there...17:02
tlaterYep, I can have a look at that later - iirc the frontend at least updates some of those states17:02
tlater(Which again is annoying)17:03
albfan[m]tristan: heading to my first 70Gb gnome build17:04
* albfan[m] setup a flatpak gnome runtime to test cached build17:06
* albfan[m] setup a build with flatpak gnome runtime to test cached build trees17:07
tristantlater, grep tells me nothing updates that17:08
tlaterHm, you're right, it's only failed elements17:10
tristannoted in the MR...17:10
tristantlater, let's just not maintain it in two places17:10
tristantlater, I would also very much like to exclude Queues from what the frontend consumes as API if possible, but I think that's a song and dance and out of scope17:11
tristanmaybe if we could start with moving the active/failed/skipped jobs out of there and use an API on scheduler to report those as needed, it can be a good step17:12
tlaterYep, that seems alright17:13
tristanso this branch changes things so that storage of the running tasks is not delegated to a distributed handful of lists in Queues17:13
tristaninstead the active jobs are handled by the scheduler17:14
tristanBut, we do want per queue statistics (thinking out loud here)17:14
tristanincluding a window on the active jobs17:14
tristanWe're adding more code about queues to scheduler, that's not ideal either17:14
tristanMaaaaaybe it's better to settle for Queues being part of the frontend facing API17:15
tristantlater, I think that's less code and work all around17:15
tristanhowever, we *have* to avoid the maintaining of two cached list values17:15
tristantlater, before this, we didnt have waiting jobs, why do we have it now ?17:16
tlatertristan: Because of the condition that some jobs aren't allowed to start running until a cleanup job has finished, but a cleanup job may not be allowed to start because other jobs are already running17:17
tlaterSo we end up with a few jobs that, while technically capable of running, can't run because they'd end up running in parallel with others17:17
tristanRight17:18
*** jonathanmaw has quit IRC17:18
tristantlater, I think it's sensitive to carry these ready jobs around17:19
tristantlater, and I think carrying them around, is because we don't have a Queue.peek_ready() method17:19
*** bethw has quit IRC17:20
tlaterI think that would do it, yes.17:20
tlaterIn which case we could avoid the waiting job, and perhaps even the active job queues17:22
tristanWe need the active jobs of course17:23
tristanso we can suspend them and terminate them17:23
tristanand resume them17:23
tristanok so far not bad...17:25
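A minimal sketch of the peek_ready() idea (hypothetical code, not the real Queue class): the scheduler inspects the next ready job without claiming it, and only pops it once the resource check passes, so no separate waiting list is needed:

    class Queue:
        def __init__(self):
            self._ready = []  # jobs whose dependencies are satisfied

        def peek_ready(self):
            # Look at the next ready job without taking ownership of it
            return self._ready[0] if self._ready else None

        def pop_ready(self):
            # Claim the job once the scheduler has decided it may run
            return self._ready.pop(0)

    def schedule_queue_jobs(scheduler, queues):
        # resources_available() stands in for whatever condition check
        # the scheduler actually performs before starting a job.
        for queue in queues:
            job = queue.peek_ready()
            if job is not None and scheduler.resources_available(job):
                scheduler.start(queue.pop_ready())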
*** jonathanmaw has joined #buildstream17:34
tristantlater, ok so I think most of what we looked at is pretty straight forward, mostly the question is; what is the right API for the main issue17:35
tristanjob behaviors17:35
tlaterMy new idea is to essentially piggyback the current dict on job objects17:36
tristandict-on-job-objects ?17:37
tlaterNot quite. Job objects have "exclusive" and "priority" attributes17:38
tlaterWhich encodes the same information17:38
tristanSo the scheduler pops off these jobs from queues... to be honest now that we look at it... having a QueuePool implement a JobSource, and having another JobSource for cleanup jobs, would be really nice17:38
*** jonathanmaw has quit IRC17:38
tristananyway... woosh that's not a necessary refactor haha :)17:38
tristanAh right17:38
tristantlater, ok so, by feeding the scheduler a job with the exclusive and priority attributes, the scheduler knows what to do ?17:39
tristantlater, I think you need more than that17:39
tristanif I understand the values of these attributes are not simple17:39
tristanthey require knowledge of the color of other jobs being processed17:40
tlaterYes, but the scheduler knows about those17:40
tlaterThe question is; should it?17:41
tristanNo, I'm looking for the code block again but almost certainly no it should not17:41
tlaterIt's in sched()17:41
tristanJOB_RULES, right17:42
tlaterThe any() statements17:42
tristanright right, that's certainly gotta go17:42
tristanthe thing is, you are saying that a job is "exclusive of other specific job types"17:43
tristanthat's what we gotta rethink17:43
tristantlater, I have an idea17:45
tristantlater, basically, what we want to say to the scheduler, is one of three things: "I dont care about resource FOO", "I need to access resource FOO" and "I need exclusive access to resource FOO"17:46
tristantlater, where resource FOO is the "artifact cache" here17:46
tristantlater, so maybe we want domains and a read-write lock kind of API for those domains17:46
tristandoes that work for these cases ?17:47
tristanit works for "exclusive"17:48
tristanthat is the lock, it could be extended to cover more things I suppose17:48
tristanpriority could be: "I want nothing of lower priority to be put in a run state before me"17:49
tlaterHm17:50
tlaterIt does work for exclusive, yes, but it will have three states17:50
tristannot sure about priority17:50
tlaterPriority is harder, yes17:50
tlaterIt still requires knowledge of other jobs17:50
tristanexclusive has three possible values yes17:50
tristanHrrrmmm17:52
tristantlater, do we need priority at all ?17:52
tlaterI left a comment about that on JobType17:52
tlaterOr JOBTYPE?17:52
tristantlater, first better question, why does CLEAN not have priority over PUSH ?17:53
tlaterBecause PUSH does not modify the artifact cache17:53
tristanif CLEAN needs to wait for all PUSH jobs to complete, I would think it should have priority17:53
tlaterIt simply needs a read lock17:53
tlaterIt does not need priority, because the cache size won't be changed by push jobs17:54
tristanright but it looks like you risk starvation17:54
tlaterHow?17:54
tristanif for instance I'm building something with large artifacts and I have to push a lot over a sluggish network17:54
tlaterYes, agreed17:55
tristanmy build jobs wont run because I have a pending clean17:55
tlaterBut my pushes are taking forever17:55
tristanand clean wont run because of pushes, push bottlenecks me17:55
tlaterHmmm17:55
tlaterDo we even need to worry about push?17:55
tristanso I'd rather have everything built with a big full pending queue in push if that's the case17:55
tlaterTechnically we can never push an artifact that will be deleted17:55
tristantlater, there was another point to this...17:56
tristanwhich is: lets just remove priority from the API17:56
tristantlater, the one who creates the Job says what resources it needs to access17:57
tristantlater, maybe for now it's a tristate, if we have more resources though it should be a hash table of booleans I guess17:57
tristani.e. the read-write locks17:57
tristanand the scheduler takes care of guarding against writer starvation by making it priority17:58
tlatertristan: so we consider PUSH exclusive as well?17:58
tristanwhy should PUSH be exclusive ?17:59
tristanthat seems strange17:59
tristantlater, we have to make CLEAN exclusive of PUSH17:59
tristanno choice17:59
tlaterYes, that is what I meant :)17:59
tlaterDo we consider other resources besides the cache yet?18:01
albfan[m]Is there any open issue about download speed, I find it downloads so slow, at least flatpak runtimes (comparing with flatpak install) I tried high numbers on fetchers but no changes18:01
tristanso, it's a scheduler with read-write locking of resources, which prioritizes the exclusive locks by not allowing any further tasks to run which require the same resource, until it18:01
tristanalbfan[m], if you flatpak install a runtime that you download with ostree clone or an ostree element in bst build... flatpak is faster ??18:02
tristanalbfan[m], that sounds quite odd18:02
tristanalbfan[m], are you using https maybe ?18:03
tristanwhen you have the gpg key for an ostree repo, it should usually be quicker to use http18:03
tristantlater, I dont know what other resources though :)18:04
tristanheh18:04
tristantlater, for now I guess it's a tristate18:04
tristantlater, wait a sec...18:05
tristantlater, there is a BETTER idea, but it hasn't completely taken form18:05
tristantlater, the better idea is that  A.) There are resources... B.) Some jobs need them, sometimes exclusively... C.) Some resources are limited18:06
albfan[m]tristan: like 8Mb/s with flatpak and 800Kb/s with bst18:06
tristantlater, then we get to replace job tokens and QueueType with a limited resource type18:07
tristana limited resource means of course that only a limited number of jobs can use it at the same time18:07
tlaterOh18:07
tlaterYes, that's cute18:07
tlaterAnd we only have one cache, so that can only be exclusively used by one job at the same time18:08
tristanwell, I think each resource is a "name" anyway :)18:09
tristaneven if we had two caches, I dont know if a job would be satisfied exclusively accessing a given one18:09
tristan(at random)18:10
tristanCPUs could be more elaborate and different and fancy of course, but ... for now it's just processing18:10
tlaterYes, I don't think multiple caches make much logical sense in our case18:10
albfan[m]a bad thing too is that stopping the current download I lose all fetched bits; there's no recovering what I have downloaded if I ctrl+c18:10
tlaterI'm just thinking about how the implementation would handle it :)18:10
albfan[m]but yes gnomesdk: https://sdk.gnome.org/ is in my project.conf18:11
tristanalbfan[m], hrmmmm, care to file an issue about that ?18:11
tristanalbfan[m], sounds completely overlooked18:11
tristanattempted resumed downloads would be awesome but I wouldn't hold my breath18:12
tristanat least ability to let the user try to deal with it, in the rare case that the user wants the tempdir / partial download, could be easy18:13
albfan[m]tristan: I found the place for that buildstream/_ostree.py fetch I will try to compare with flatpak install18:13
tristantlater, so... sounds like we have a good plan I think ?18:15
tlaterYep, this sounds good18:16
tlaterNot too hard to implement either, I think, so that's a plus18:16
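Pulling the plan together, a rough sketch of the resource idea as discussed (all names here are hypothetical, not BuildStream API): a resource is a name with a token count, jobs ask for shared or exclusive access, and a pending exclusive request blocks new shared holders so a CLEAN cannot be starved by PUSHes:

    import enum

    class Access(enum.Enum):
        NONE = 0        # job doesn't touch this resource
        SHARED = 1      # e.g. PUSH reading the artifact cache
        EXCLUSIVE = 2   # e.g. CLEAN modifying the artifact cache

    class Resource:
        def __init__(self, tokens=1):
            self.tokens = tokens            # limited resources, e.g. CPUs
            self.shared_holders = 0
            self.held_exclusively = False
            self.exclusive_pending = False  # a waiting writer blocks new readers

        def try_acquire(self, access):
            if access is Access.NONE:
                return True
            if access is Access.SHARED:
                if self.held_exclusively or self.exclusive_pending:
                    return False            # don't starve the pending CLEAN
                if self.shared_holders >= self.tokens:
                    return False            # the resource is limited
                self.shared_holders += 1
                return True
            # Access.EXCLUSIVE
            if self.held_exclusively or self.shared_holders > 0:
                self.exclusive_pending = True
                return False
            self.held_exclusively, self.exclusive_pending = True, False
            return True

        def release(self, access):
            if access is Access.SHARED:
                self.shared_holders -= 1
            elif access is Access.EXCLUSIVE:
                self.held_exclusively = False

This replaces both the JOB_RULES table and the job tokens: a build job would acquire a CPU token (SHARED on a multi-token resource), PUSH takes SHARED on the cache, CLEAN takes EXCLUSIVE on it.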
tristantlater, before wrapping up18:17
tristantlater, where does the cleanup job get created ?18:17
tlaterBuildQueue, iirc18:17
tlaterAfter it finishes a build job18:18
tristanlooks like you stopped using that stateful _start_cleanup var thing, but left it in the Scheduler's __init__18:18
tlaterEep18:19
* tlater assumes this happened while rebasing18:19
tlaterThat was a very messy rebase |:18:19
tristantlater, looks like you can easily schedule more than one cache size calculator job18:20
tlaterYes... I saw little reason not to18:20
tristanlooks like it would be nice to have a home for this logic to live18:21
tlaterExcept for taking longer18:21
tristanrather than a mutant limb of BuildQueue18:21
tristani.e. ensuring there is only ever one, knowing that it's queued and you dont need two of them18:21
tristantlater, looks like you also fail to do it in the PullQueue18:22
tlaterAh right18:23
tristanafter having added data to the artifact cache and increased its size18:23
tlaterHm18:23
tristanmaybe best to have some helper function directly on Scheduler in this case18:23
tlaterI used to have that, I suppose that's nicer after all18:23
tristanlike to manage if there is an ongoing job internally and not redundantly queue it18:24
tristanit's a concession, would be nice to not have the scheduler know about the cache specifically18:24
tristanit's still a mutant limb, but its now shared by multiple other limbs18:24
tlaterYeah, eventually it might be nicer to have something else launch these jobs18:25
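A sketch of the helper tristan suggests (hypothetical names): queues call one scheduler method after adding artifacts, and the scheduler refuses to queue a second cache-size job while one is already in flight:

    class Scheduler:
        def __init__(self):
            self._cache_size_job = None  # at most one in flight

        def check_cache_size(self):
            # Called by BuildQueue and PullQueue after adding an artifact
            if self._cache_size_job is not None:
                return  # already queued or running; don't duplicate it
            self._cache_size_job = self._spawn_job(
                action=self._compute_cache_size,
                on_complete=self._cache_size_done,
            )

        def _cache_size_done(self, job):
            self._cache_size_job = None

Here _spawn_job and _compute_cache_size stand in for whatever the branch actually uses to launch jobs; the point is only that the single-instance bookkeeping lives in one place.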
albfan[m]tristan: let me use first flathub instead of sdk.gnome.org18:28
tristantlater, I feel like event source objects is the way to go18:29
tristantlater, our own little loop, every time the scheduler wakes up it checks what event sources are ready to be dispatched, and we have one which does the element queues and resulting side effect events18:30
tristanbut meh18:31
tristanit's possibly over engineered :)18:31
tristan(not "our own little loop" really, still asyncio underneath of course)18:31
tlaterHm, well, ok, we have a plan for now18:31
* tlater hopes he'll manage before juergbi returns and screws over his branch *again* ;)18:32
tristanI'm rooting for you18:35
tristan:)18:35
*** tristan has quit IRC18:39
albfan[m]tried with flathub, definitely slower, I will do some research and file a bug18:41
albfan[m]flatpak (8Mb/s), ostree (6Mb/s) and ostree from python (BuildStream) 1Mb/s; asking on #flatpak for more info18:53
albfan[m]have to make a try without any download in ~/.local/share/flatpak/19:00
gitlab-br-botbuildstream: merge request (caching_build_trees->master: Caching buildtrees) #474 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/47419:01
albfan[m]anyway, I have my gtksourceview built against the gnome sdk with the caching_build_trees branch (just rebased it on top of master), let's try to run some tests!19:02
*** laurence has left #buildstream19:27
*** laurence has joined #buildstream19:27
gitlab-br-botbuildstream: merge request (caching_build_trees->master: Caching buildtrees) #474 changed state ("opened"): https://gitlab.com/BuildStream/buildstream/merge_requests/47420:58
albfan[m]oh, hehe this ^^ is my rebase (I was confused)21:09
albfan[m]tristan: is it expected that cached build trees can be used in a bst shell?21:11
*** Prince781 has joined #buildstream21:13
*** aday has quit IRC21:21
*** tristan has joined #buildstream21:22
*** j1mc_polari has quit IRC21:34
*** jcampbell has joined #buildstream21:36
*** jcampbell has quit IRC21:47
*** jcampbell has joined #buildstream21:49
gitlab-br-botbuildstream: issue #445 ("Slow downloads on BuildStream") changed state ("opened") https://gitlab.com/BuildStream/buildstream/issues/44522:04
*** aday has joined #buildstream22:31
*** bochecha_ has quit IRC22:40
*** aday has quit IRC22:48
