107th TC39 Meeting

June 20, 2025

Day One—14 April 2025

Attendees

| Name | Abbreviation | Organization |
| ---- | ------------ | ------------ |
| Waldemar Horwat | WH | Invited Expert |
| Daniel Ehrenberg | DE | Bloomberg |
| Ashley Claymore | ACE | Bloomberg |
| Jonathan Kuperman | JKP | Bloomberg |
| Ben Lickly | BLY | Google |
| Bradford C. Smith | BSH | Google |
| Chris de Almeida | CDA | IBM |
| Daniel Minor | DLM | Mozilla |
| Jesse Alama | JMN | Igalia |
| Chip Morningstar | CM | Consensys |
| Michael Saboff | MLS | Apple |
| Nicolò Ribaudo | NRO | Igalia |
| Erik Marks | REK | Consensys |
| Richard Gibson | RGN | Agoric |
| Josh Goldberg | JKG | Invited Expert |
| Luca Forstner | LFR | Sentry |
| Philip Chimento | PFC | Igalia |
| Christian Ulbrich | CHU | Zalari |
| Mikhail Barash | MBH | Univ. of Bergen |
| Eemeli Aro | EAO | Mozilla |
| Chengzhong Wu | CZW | Bloomberg |
| Dmitry Makhnev | DJM | JetBrains |
| J. S. Choi | JSC | Invited Expert |
| Keith Miller | KM | Apple Inc |
| Aki Rose Braun | AKI | Ecma International |
| Luca Casonato | LCA | Deno Land Inc |
| Samina Husain | SHN | Ecma International |
| Istvan Sebestyen | IS | Ecma International |
| Duncan MacGregor | DMM | ServiceNow Inc |
| Mathieu Hofman | MAH | Agoric |
| Mark Miller | MM | Agoric |
| Ron Buckton | RBN | Microsoft |
| Andreas Woess | AWO | Oracle |
| Romulo Cintra | RCA | Igalia |
| Andreu Botella | ABO | Igalia |
| Ruben Bridgewater | | Invited Expert |
| Michael Ficarra | MF | F5 |
| Ulises Gascon | UGN | Open JS |
| Kevin Gibbons | KG | F5 |
| Shu-yu Guo | SYG | Google |
| Jordan Harband | JHD | HeroDevs |
| John Hax | JHX | Invited Expert |
| Stephen Hicks | | Google |
| Tom Kopp | TKP | Zalari GmbH |
| Veniamin Krol | | JetBrains |
| Rezvan Mahdavi Hezaveh | RMH | Google |
| Luis Pardo | LFP | Microsoft |
| Justin Ridgewell | JRL | Google |
| Ujjwal Sharma | USA | Igalia |
| James Snell | JSL | Cloudflare |
| Jack Works | JWK | Sujitech |

Opening & Welcome

Presenter: Ujjwal Sharma (USA)

USA: Perfect, great. Thank you. Then I will start with this and prompt folks as we go. Hello and welcome to the 107th meeting. It’s the 14th of April. This is a fully remote meeting in the New York timezone. I’d like to introduce you all to your chairs group, which you might remember from the last meeting, or if you missed the last meeting, here is some news for you. There’s me, RPR, and CDA, and the facilitators JRL, DLM and DRR. On behalf of all of us, I’d like to welcome you all and kick off this meeting. Make sure you’re signed in. If you’re here, I’m assuming you’re already signed in; if not, please go back and sign in. The responses to this form are really helpful for us to track attendance. TC39, as you know, has a code of conduct; please be mindful and follow it at all times. It applies to this meeting, and since it’s online, the various mediums and chat rooms are also governed by the code of conduct. The daily schedule is pretty straightforward for these daily meetings. We start now, which in this case is 10 in New York time, and we finish in five hours: a two-hour session until the break, then an hour break, and another two hours until three in New York time.

USA: A quick rundown of our comms tools before we begin. There’s TCQ, which is by far one of the most unique and important tools that we use for communicating. You should have the link to TCQ already. As you can see, the entire agenda is there, and this is how any individual agenda item looks: there’s a queue and a bunch of things. This side is the view for a participant, and this is how the view looks for you if you’re a speaker. I’ll quickly discuss the different options you have. They go from right to left in order of decreasing priority, so "point of order" is the highest priority; that’s why it’s colored red. The important part here is: please use it sparingly, for emergencies such as the notes not updating for you, some serious technical glitch, or anything you believe is urgent enough that the meeting should halt for it to be resolved. Next you have clarifying questions. These jump to the top of the queue, apart from points of order, obviously; with these you are interrupting the running flow of discussion to ask a clarifying question regarding the current point being discussed. Next you have "discuss current topic", where you add another item for discussing the current topic: it doesn’t go to the bottom of the queue, but to the end of this particular discussion. Then you can introduce a new topic, which puts you at the bottom of the list, so you can start a new topic after the most recent one has finished. That’s all for adding yourself to the queue. There’s another button, only visible if you’re already speaking, which says "I’m done speaking".
Please do not use it for now, because the problem with this button at the moment is that it can sometimes get double-clicked: for instance, if the chairs are advancing the queue and you also press this button, you might skip the person after you. So because of TCQ’s technical glitches at this moment, we do not recommend using this button. That’s all for TCQ. We also have Matrix. You might enjoy any of these channels. Of course, the delegates channel is meant for the most technical discussions, and Temporal quite the opposite. All these channels are different and have their own vibe, but overall there’s a group of channels dedicated to specific subjects, and you might want to be on them. So join the TC39 space on Matrix, and ask us for joining details if you don’t have them. Next is the IPR policy. This is a quick reminder of Ecma’s IPR policy. Everybody who is part of this meeting is supposed to be either a delegate from an Ecma member, in which case your organization has collectively signed and approved the Ecma IPR policy, or an invited expert, in which case you have done so yourself. If you have not, please contact us, and be aware that your contributions in this meeting are going to be used under this royalty-free licensing regime. I’m not a lawyer myself, but make sure that you have reviewed this. Observers, on the other hand, by not contributing anything to the meeting themselves in terms of spoken contributions, are not subject to this. Notes are live; I believe we are being transcribed right now. And remember to summarize key points at the end of each topic.
For instance, if you have a presentation and you think you have a pretty good idea what the conclusion or summary is going to be, feel free to include it in the presentation itself, or take a few minutes at the end of your presentation to go over a quick summary. Actually, I’m supposed to read this out: a detailed transcript of the meeting is being prepared and will eventually be posted on GitHub. You may edit it at any time during the meeting in Google Docs for accuracy, including deleting comments which you do not wish to appear. You may request corrections or deletions after the fact by editing the Google Doc in the first two weeks after the TC39 meeting, or subsequently by making a PR in the notes repository or contacting the TC39 chairs. The next meeting, the 108th, is from the 28th to the 30th of May in A Coruña, hosted by Igalia, in Central European Summer Time. Yay for that. And let’s move on to the rest of the agenda.

USA: So first of all, let’s ask for note takers. Any volunteers? Let me switch.

JMN: I can help out. This is Jesse Alama.

USA: Thank you, Jesse. Would anyone else like to help out with the notes? It’s the very first slot of the day, and if I may, this is probably one of the easiest ones, given how relaxed the topics seem to be, as opposed to later parts of the meeting where things can get quite complicated.

ACE: I’ll take an easy slot.

USA: Thank you, Ashley. So, yeah, noted down, perfect; let’s move on. Okay, so let’s approve the previous minutes. I’ll give a minute, well, a few seconds, for anyone to mention any thoughts on the previous minutes. A reminder that you can always edit them in the notes repo if you’d like. Anyone?

CDA: Yeah, so the minutes are still not published. There’s a PR out, but the—there’s still a bunch of open, unresolved suggestions. We should direct those folks to just submit, like, just make those commits directly, because like this commonly happens where somebody’s waiting for, I guess, the PR author to approve the suggestions, but they should just feel free to make them, but we should make a point to get this done as soon as possible.

USA: Right. Yeah. Thank you, Chris. I guess in this case the previous minutes are part of the PR. We should merge it soon, but since it’s still not merged, you have a great moment to go through it, approve it if you’d like, or just post any corrections. All right, then let’s say that the previous minutes have been approved; let’s make sure that we merge them in soon. Next, let’s adopt the current agenda. I’ll give a few seconds for folks to raise any concerns about the current agenda. Sounds like consensus, so we have adopted the current agenda. Next we have the secretary’s report. Hello, Samina.

Secretary's Report

Presenter: Samina Husain (SHN)

SHN: Thank you for the start of the meeting, and welcome to everybody. I have a relatively short slide deck, covering the activities that have taken place since our last meeting. The opt-out period is open for ECMA-262 and ECMA-402 ahead of their approval at the GA, and I’d like to give you a bit of an update on some new discussions we’re having around new topics and work for Ecma. Ecma has a code of conduct, and you can review the invited expert rules. Some documents have recently been published; if you want access to those documents, you just have to ask your chairs. Dates for the next meetings are also noted. Ujjwal already mentioned that the very next plenary is going to be in May; the next important date for us to be aware of is the June GA, which is the 25th of June this year.

SHN: All right, so as I mentioned, very important for the June meeting that’s coming up: we have the 60-day opt-out period open, as always. It does tend to run very smoothly and I anticipate the same, and there are two approvals, for both ECMA-262 and ECMA-402, the 16th edition and the 12th edition respectively. I think they’ve already been frozen for some time, so thank you very much for all of that work, and we will proceed to the approval in June.

SHN: On the new work that’s going on, there has already been some good discussion on forming a new TC, TC57. There’s a question amongst the discussion in the ExeCom. I think we are moving forward well; we are on the second cycle of discussion, and it will be excellent to have a new TC in the work items of Ecma.

SHN: Just some other items. As a reminder, a number of invited experts have recently joined TC39, not to mention other TCs; as always, I will review them in the third quarter of this year. Many of the new invited experts are part of organizations, and I look forward to seeing those organizations ready to make decisions to join, or to assess their participation and activities with Ecma. I was reminded by W3C about the horizontal review. I’ve left a note that this is still an open discussion, so as TC39 deems fit, we would then come back to them on how to be better involved in the horizontal review.

SHN: I’m going to pause there, because that is the extent of the report based on what we discussed at our last meeting, which was just six weeks ago. I’ll stop here to ask if there’s anything I missed that you would have expected me to present, or if you have questions on what I have presented.

DE: Once there is input from the committee, the new TC will give that feedback back to the open-source community so that they can digest it, make a new proposal, and everyone can agree on a common standard. I think this could be a really useful tool for unifying the whole community ecosystem. And I would encourage everyone here who is interested to participate. Please get in touch with me if you’re interested or if you have feedback on this idea.

AKI: I don’t think I have any specific comments. I have been asked about our process for collecting information from participants, and how we utilize forms and handle that data. That’s something I’m working on, and I will have something in the near future, but it’s not anything slide-worthy at the moment.

SHN: Thank you, AKI. I want to recognize and thank AKI for her work on looking into future tooling, so we understand some of the requirements we’ve had. We’ve just had a meeting on it, so please just be a little patient; we’ll come back to you with some proposals on how we’re going to help improve that, and AKI is going to be involved in that. I’m also thanking AKI in advance for the PDF versions of the documents once they are approved in June. Thank you. Ujjwal, thank you very much.

AKI: Thanks to the 262 editors, by the way, for their help with the direction we’re going to go for the PDF. They’ve put a lot of work in as well. Thank you.

SHN: Yes, thank you very much. Ujjwal, thank you. That’s the extent of my presentation. I will be online if there are any further questions.

Speaker's Summary of Key Points

A brief overview of current activities and upcoming milestones was presented:

  • The opt-out period is open for ECMA-262 (16th Edition) and ECMA-402 (12th Edition), which are both scheduled for final approval at the June General Assembly.
  • An update was shared regarding the progress of new technical work, specifically the ongoing discussions around the formation of a new TC. SHN noted positive momentum within the ExeCom and highlighted that this initiative represents a promising addition to Ecma’s future work program.
  • Reminders were given about Ecma’s Code of Conduct, access to recently published documents, and upcoming meetings, including the next plenary in May and the June GA. SHN also mentioned a number of newly added invited experts across various TCs, with a formal review of all IE statuses scheduled for Q3.
  • AKI reported on ongoing work related to information collection for tools and confirmed upcoming contributions related to PDF document formatting.
  • AKI and the ECMA-262 editors were thanked for their continued support and collaboration.

TC39 Editors’ Update

Presenter: Kevin Gibbons (KG)

KG: There have been a fair handful of normative changes, partly because we are in the process of cutting ES2025 and we wanted to make sure we got as many of the outstanding things in as we could. So I’ll run through all of these very briefly just so everyone is aware. This first one is a fairly technical change. It makes it so there’s not a distinction between variables declared with var inside eval vs declarations without a var, so engines don’t have to do the work of keeping track of whether something is a var declaration or not, which is just useless work. The second one fixed an oversight where, when you for-await over a synchronous iterator and that synchronous iterator yields rejected promises, the for-await treats that as an exception, and when iterators throw exceptions, you don’t close them. But this isn’t an exception from the point of view of the synchronous iterator; it’s only an exception from the point of view of the async consumer. So the synchronous iterator should be closed in this case. We had consensus for this literally years ago and were waiting to merge it until there were implementations; the implementation landed in Safari a few months ago, which is why this landed. I’ll have more to say about that later.
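The for-await behavior described above can be sketched as follows (a minimal illustration, not the spec text; the generator and function names are made up). The rejection always surfaces as a thrown error in the async consumer; whether the sync iterator is also closed (so that its `finally` block runs) depends on the engine having shipped the fix.

```javascript
// A sync generator that yields an already-rejected promise.
function* source() {
  try {
    yield Promise.reject(new Error('boom'));
    yield Promise.resolve(2); // never reached
  } finally {
    // Runs only if the iterator is *closed*; under the merged fix,
    // for-await closes the sync iterator in this scenario.
    console.log('closed');
  }
}

async function consume() {
  try {
    for await (const v of source()) {
      // never reached: the first yielded promise is rejected
    }
  } catch (err) {
    return err.message;
  }
}

consume().then(msg => console.log(msg)); // "boom"
```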

KG: We added RegExp.escape. We made another iterator-closing tweak where, if you pass an invalid value to an iterator helper, that should close the underlying iterator. And we added Float16Array. And then #3559, which was a bugfix: in the process of updating the spec towards merging iterator helpers, we tweaked some of the machinery, and in doing so made an accidental normative change to the array and RegExp string iterators, where they became observably non-reentrant, which was not our intention and not what engines implemented. So ABL, I believe, opened a PR to rewrite this so we restore the original behavior. I did want to mention this is a bugfix, and sometimes we backport bugfixes when we have very recently cut a new edition of the spec that’s still in the IPR opt-out. The editors intend to do this unless there’s some particular reason not to. I don’t believe this should affect the IPR opt-out, especially because the behavior that we are restoring was in fact already part of the specification as of a couple of years ago. So this was strictly a bugfix, but it is technically a normative change, so I just want to give a heads-up that there will be one erratum normative change to ES2025.

KG: Okay. So that’s all the normative changes. There have been a handful of editorial changes I want to call out. We now have dark mode, thanks to, again, Arai. So you’ll see that if your browser is set to prefer dark mode.

KG: And then #3353, I want to call out only because it’s a tweak to the async module machinery, which is extremely complicated stuff. If you work with that, I recommend taking a look at this change; although it’s a fairly small change, I expect you’ll consider it an improvement. If you don’t work with the machinery, you don’t care about this at all. And finally, as AKI already mentioned, there have been a bunch of changes towards making the printable document less crap. So it’s looking much nicer now. Thank you AKI and also MF for work on that.

KG: We have a fairly similar list of upcoming work, although I wanted to call out that we’ve actually gone through (well, mostly MF has gone through) and documented the editorial conventions that we follow. It’s currently just a wiki page, and there’s a link here if you’re interested. This is things like particular phrasings we use, or decisions we make when editing the document, that can’t be captured by Ecmarkup. We try to codify as many as we can in code, but of course that’s not practical for everything. And the last thing, of course, is just to call out that ES2025 has been cut. Apart from the minor tweak I mentioned, the link is on the reflector, and the IPR opt-out period has begun, ahead of the GA in June. If you or any of your lawyers have any objections, speak now or forever hold your peace. And that’s all I’ve got. Thanks so much.

ECMA-402 Editors’ Update

Presenter: Ujjwal Sharma (USA)

USA: Anyway, all right, I’ll be very quick. Hello everyone again. This is a brief update from the ECMA-402 editors. As KG mentioned earlier for 262, the new edition is out, or, well, it is in the opt-out period. Please check it out, and let us know as soon as possible if you have any concerns regarding this; otherwise it’s good from our end. We have done a bunch of editorial improvements; this is the edition that includes DurationFormat.

USA: But here are three big editorial improvements (972, 983, 984). One restructures the unit style and display handling: instead of having multiple slots for style and display, we have one slot per unit for the options that correspond to it. So there’s a record that contains the style and the display, a bit more structured than it used to be, basically. Then we have cleaned up NumberFormat a bit. Some of this is still being discussed, so if you’re interested, please check out that PR, but most of those editorial improvements have been merged. And then we have abstracted away the locale-resolution part of the constructors into a single AO. So all around, there are a few different editorial improvements. It should be a lot easier now to make sense of the spec, and, yeah, that’s it for 402. So thanks.

ECMA 404

Presenter: Chip Morningstar (CM)

  • no slides

CM: Yeah, ECMA 404. Well, I looked. It’s still there.

USA: That’s as good as it could be, right?

CM: Yes, it’s excellent.

Test262 Status Update

Presenter: Philip Chimento (PFC)

PFC: We’ve continued to have many nice smaller contributions from many people. We’ve been chipping away at the large pull request for tests for the Explicit Resource Management proposal, with many thanks to a contributor from Firefox as well. And I think that’s all that there is to report this time.

TG3 Status Update

Presenter: Chris de Almeida (CDA)

CDA: Yes, TG3 continues to meet to discuss security impacts of proposals in various stages. Please join us if you are interested.

TG4 Status Update

Presenter: Jonathan Kuperman (JKP)

JKP: This is a pretty quick update. Just a reminder, the working mode that we’ve been using is to seek annual approval on things, so we’ve been meeting frequently in the meantime, working on our newer features as well as normative changes. Mostly, between the previous plenary and today, we’ve been working on editorial updates.

JKP: The big one is we converted the TG4 source map spec from Bikeshed to Ecmarkup, and we’ve added formatting and linting for it, as well as improving the experience for dark-mode users.

JKP: We’ve made a few normative updates. A reminder: these slides and the links are in the agenda. We had a typo in the VLQ decoding algorithm, and another issue with how the continuation bit is handled when decoding VLQs. We also moved our algorithm examples to the ECMA “syntax-directed operations” grammar.

JKP: As far as our proposals, we’ve been continuing to work on range mappings and scopes. For range mappings, we have a few small changes, like allowing multi-line mappings, and for scopes we have more work: we’ve got larger PRs discussing how to future-proof scope encoding and decoding, as well as where to use relative versus absolute indices.
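For context on the continuation-bit handling mentioned above, here is a sketch of base64 VLQ decoding as commonly described for source maps (this is an illustration of the general scheme, not the normative TG4 algorithm text): each base64 digit carries 5 value bits, the high bit marks continuation, and the low bit of the completed value is the sign.

```javascript
const B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

// Decode a string of base64 VLQ values into an array of integers.
function decodeVLQ(str) {
  const out = [];
  let value = 0, shift = 0;
  for (const ch of str) {
    const digit = B64.indexOf(ch);
    value += (digit & 0b11111) << shift; // low 5 bits are value bits
    if (digit & 0b100000) {
      shift += 5;                        // continuation bit set: more groups follow
    } else {
      // Last group: lowest bit of the assembled value is the sign bit.
      out.push(value & 1 ? -(value >>> 1) : value >>> 1);
      value = 0;
      shift = 0;
    }
  }
  return out;
}

console.log(decodeVLQ('AACA')); // [0, 0, 1, 0]
console.log(decodeVLQ('D'));    // [-1]
```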

TG5: Experiments in Programming Language Standardization

Presenter: Mikhail Barash (MBH)

MBH: We had a very successful workshop at the plenary in Seattle, with 21 attendees from, I think, 11 different organizations. We continue to hold monthly meetings, and we are currently arranging two TG5 workshops. The one confirmed as of now is in A Coruña the day before the plenary starts, so the 27th of May, hosted at the University of A Coruña; they have prepared some presentations for us, and I will also post later on the reflector and in the Matrix channel a call for presentations from delegates, if you want to give a presentation at that workshop. And we are currently planning a TG5 workshop in Tokyo for the November meeting.

MBH: One more thing, on outreach: there will be a workshop on programming language standardization and specification, co-located with the European Conference on Object-Oriented Programming, which will be held in July in Bergen. The keynote will be on Wasm SpecTec, the mechanized approach to the WebAssembly specification, and I would like to bring your attention to this. We encourage you to submit proposals for talks; it’s a 300-word abstract, and the links will be shared on the reflector and also in the Matrix channel. So please consider submitting. That’s all.

Updates from the CoC

Presenter: Chris de Almeida (CDA)

CDA: There are no updates from the CoC committee. There is nothing new to report. As always, remind folks that we are always welcoming new members to the CoC committee, so if that’s something you’re interested in, please reach out.

Normative: add notation to PluralRules

Presenter: Ujjwal Sharma (USA)

USA: This is my presentation about a small normative pull request that we made on ECMA-402. I’d like to quickly introduce it, and by the end of the presentation, hopefully you’ll have enough background and confidence about this that you would agree to putting it into ECMA-402. So the title says “notation support for PluralRules”. What does that mean?

USA: Okay, so here was the problem. Intl.PluralRules, for the uninitiated, is a constructor on the Intl object that is slightly different from all of the existing constructors. While there’s a bunch of formatters (DateTimeFormat, NumberFormat; we add formatters, we love formatters…), this is actually an API that does selection, so it’s a bit more of an interesting building block. What it does is expose the locale-specific pluralization rules to the user, so you can input a number and ask, for any given locale, what the plural category is going to be for it. Now, for English speakers, this doesn’t sound super impressive, given there are only two. Languages like Spanish, for instance, have three (there’s a separate category for bigger numbers, for example), but there are more complex languages that can have up to five or six plural categories, so it can be quite an involved process to build an application that takes all of these into account in a way that works across locales. That’s what PluralRules does.
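The selection behavior described above can be shown in a few lines (a minimal illustration using the shipped Intl.PluralRules API):

```javascript
// Intl.PluralRules maps a number to a CLDR plural category for a locale.
const en = new Intl.PluralRules('en');
console.log(en.select(1)); // "one"
console.log(en.select(2)); // "other"

// Polish uses more categories than English's two:
const pl = new Intl.PluralRules('pl');
console.log(pl.select(2)); // "few"
console.log(pl.select(5)); // "many"
```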

USA: The problem is that it doesn’t take the notation into account. Why are notations important? I’ll give a quick history lesson on this. Notations weren’t originally in NumberFormat, but they were one of the more frequently requested features, so in May 2018 (and I know that these kinds of timings can be complicated, but I say May 2018 because of this issue; shoutout to SFC, by the way, for the heavy lifting). Spanish has a third category for “millones”: every time you are in the millions, there’s a different plural category.

USA: Fun fact, but, yeah, so in May 2018, unified NumberFormat added this notation support to NumberFormat. This means that NumberFormat can now format numbers in scientific notation or compact notation. This was nifty, and pretty much right away, or let’s say within two years, we wanted to support notations in PluralRules too. It looks like a long time has passed because unified NumberFormat took a while to happen; as you can see, at this point unified NumberFormat was still not merged. The idea was that once unified NumberFormat was merged and had notations, we would simultaneously start supporting number notations in PluralRules. It somehow slipped through the cracks, however, and that didn’t happen. But the idea was that something as simple as this could be accepted, and given that notation was something already being supplied to NumberFormat, a similar options bag could be used for both.

USA: So, yeah, not only should PluralRules support notations, it should probably stick to the same options that NumberFormat does. Thinking of a solution more recently, I thought, well, if we have a notation slot on the PluralRules object, then we can just pass it to ResolvePlural, and given that this operation is not really specified (it’s implementation-defined, so to say), the final result is that we just need to start storing this information and passing it into the AO, and that would pretty much be it.

USA: Now, while I call it a minimal solution, the PR is quite minimal by normative-PR standards as well, which is why I don’t think it deserves to be a proposal by any shot. But condensing it even further (removing, for example, the part where I add the new slot, put it in the constructor, and put it in the list of slots), this is the change: in the spec, you would perform the shared NumberFormat-options AO with these options and “standard”, where “standard” is the notation. This is an AO that is shared between NumberFormat and PluralRules. Now what we’re doing is getting the notation from the options object, setting the internal slot that I talked about earlier, and then calling this AO. So we perform the NumberFormat-options AO with that notation instead of it always being “standard”, “standard” being the standard notation; the default is still “standard”.
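To illustrate the parallel being drawn here: NumberFormat already accepts a notation option today, and per this PR, Intl.PluralRules would accept the same values with the same default. The PluralRules usage below is hypothetical until the PR ships in engines; only the NumberFormat part runs today.

```javascript
// NumberFormat's notation option is already part of ECMA-402:
const compact = new Intl.NumberFormat('en', { notation: 'compact' });
console.log(compact.format(1_000_000)); // "1M"

// Hypothetical, once the PR lands: the same options bag on PluralRules,
// so selection can account for the notation used when formatting.
// new Intl.PluralRules('es', { notation: 'compact' }).select(1_000_000);
```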

USA: There are a few options here that I clipped out for readability, but the standard, engineering, scientific, and compact options are all available for notation. And in April 2025, which is slightly less than two weeks ago, we got approval from TG2. So here I am. I hope that this was informative enough and that you all feel confident. I would now like to ask for consensus.

DLM: Yeah, we support this normative change.

DE: The change sounds good to me. I think we should treat this similarly to staged proposals in terms of merging it once we have multiple implementations and tests. We could track PRs like this. Anyway, this seems like a very good change to me.

USA: Just FYI, we have tracking for all normative PRs for ECMA 402, but noted: tc39/ecma402/wiki/Proposal-and-PR-Progress-Tracking#ecma-402-prs

DE: Okay, great.

CDA: Awesome. That’s it for the queue, so it sounds like you have support. Are there any other voices of support for this normative change?

USA: Awesome. Thank you. And I have a proposed conclusion for the notes: a normative pull request on ECMA-402 was presented to the committee for consensus; this PR adds support for a notation option in the PluralRules constructor, for handling different non-standard notations.

DE: Do you want to say the part about how we had consensus?

USA: Yeah, and with, I guess, a couple of supporting opinions, we achieved consensus for this pull request.

Speaker's Summary of Key Points

Normative pull request tc39/ecma402#989 on ECMA-402 was presented to the committee for consensus. The PR adds support for a notation option in the Intl.PluralRules constructor, for handling different non-standard notations.

Conclusion

The committee reached consensus on the pull request, with explicit support from DE and DLM.

Normative: Mark sync module evaluation promise as handled

Presenter: Nicolò Ribaudo (NRO)

Slide

NRO: I’m presenting a pull request fixing a bug around module promise rejection handling. A little bit of background: how does promise rejection tracking work, and what is the problem? Rejection tracking is basically the machinery that lets us fire some sort of event when you have a promise that gets rejected and then later gets handled. For example, browsers do this through an unhandledRejection event. So how does this work in detail?

Slide

NRO: Well, whenever you reject a promise (either by calling the reject function from the constructor, or by using Promise.reject, and also for promises created internally by the spec and rejected): if, when the promise gets rejected, it’s not handled yet, so it does not have a callback registered through .then or .catch, then we call HostPromiseRejectionTracker.

Slide

NRO: And then later, when you actually handle the promise, so when you call .then or .catch, we tell the host “now this promise has been handled”, and the host can decide whether the event is going to fire, or do whatever it wants to track which promises are not being properly handled.
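The two host notifications described above can be observed in Node.js, which surfaces them as the 'unhandledRejection' and 'rejectionHandled' process events (a sketch; the exact timing of the events is host-defined):

```javascript
// Record the host's rejection-tracking notifications.
const events = [];
process.on('unhandledRejection', () => events.push('reject'));
process.on('rejectionHandled', () => events.push('handle'));

// Reject a promise with no .then/.catch attached: after this turn,
// the host reports it as unhandled ("reject").
const p = Promise.reject(new Error('boom'));

setTimeout(() => {
  // Attaching a handler later makes the host report it as handled.
  p.catch(() => {});
  setTimeout(() => console.log(events.join(',')), 0); // "reject,handle"
}, 0);
```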

Slide

NRO: So that was Promises, and how does this interact with modules?

Slide

NRO: There are multiple types of modules in this spec, or rather Module Records, which represent modules. There is a Module Record base class and two main types of concrete Module Records: Cyclic Module Records and Synthetic Module Records. Cyclic Module Records are modules that support dependencies; this is an abstract base class, and our spec provides Source Text Module Records as the variant for JavaScript. For example, the WebAssembly ESM integration proposal is proposing a new type of Cyclic Module Record. Synthetic Module Records are modules where you already know the exports and you just have to wrap them with some sort of module to make them importable. The way module evaluation works has changed over the years. Originally there was this Evaluate method on all Module Records, and it would trigger evaluation and return a throw completion if there was an error, otherwise a normal completion. But when we introduced top-level await, we changed the method to return a promise, with the detail that only Cyclic Module Records can actually await. For any other type of Module Record, like any type of custom host module, the promise returned by the Evaluate method must already be settled. The promise there is just to have a consistent API signature; it is not actually used as a promise.

NRO: And given that this promise is going to be already settled, the module evaluation machinery, whenever we have a module record that’s not a Cyclic Module Record, just looks at the internal slots of this promise to see if it’s rejected and extracts the value that it was rejected with. You can see here that we only use promise.[[PromiseResult]] to get the value: we never handle the promise normally, we just look at its internal state.

NRO: And this causes a problem. Because we’re not reading this promise using the normal promise abstract operations, when this promise is created by the host and rejects, the host hook gets called to say, "hey, this promise is rejected and not handled", and then we never tell the host that the promise has been handled, because we never call PerformPromiseThen, which is the AO responsible for calling the host hook. So the host doesn’t know that we actually took care of this completion here. For example, take these three modules on this slide. We have a module that does a dynamic import of a.js, which depends on some module b. This module b is not a JavaScript Module Record; it’s a Module Record managed by the host. It creates a promise, rejects it, and returns the promise as the result of its evaluation, so when the promise is rejected, it calls the HostPromiseRejectionTracker hook, telling the host that the promise has been rejected.
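
The scenario described above can be sketched as three modules (file names are from the slide; module b is pseudocode, since a host-managed Module Record is not expressible in plain JavaScript):

```javascript
// main.js — the dynamic import creates a promise for the whole graph of a.js
import("./a.js").catch(err => {
  // This handles the *graph* promise, not the host's internal promise for b.
});

// a.js — a JavaScript (Source Text) module depending on b
import "b"; // "b" is a host-managed, non-cyclic Module Record

// "b" (pseudocode) — its Evaluate() returns an already-rejected promise;
// rejecting it calls HostPromiseRejectionTracker(p, "reject"), and before
// this fix nothing ever reported that promise as handled.
```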

NRO: Then during the evaluation of a.js, we perform the steps from the slide before: we look at the error, and we do not call HostPromiseRejectionTracker—oh, here the slide says "rejected", it should be "handled"—in the promise hook. In the meanwhile, dynamic import creates another promise for the evaluation, not just of b, but of the whole module graph of a.js, and in the module on the left we handle this other promise. So the promise for the whole graph of a is handled, and we never handle the promise for module b.

NRO: So the fix here is to just change the InnerModuleEvaluation abstract operation to explicitly call the host hook that marks the promise as handled when we extract the rejection from the promise. And, well, editorially I’m doing this as a new AO, because it’s also used by the import defer proposal, rather than having it inline in the module evaluation algorithm.
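
A sketch of the change (paraphrased, not the exact spec text of PR #3535): when evaluation extracts the rejection from an already-settled Evaluate() promise, it now also notifies the host:

```
1. If module is not a Cyclic Module Record, then
   a. Let promise be module.Evaluate().
   b. Assert: promise.[[PromiseState]] is not ~pending~.
   c. If promise.[[PromiseState]] is ~rejected~, then
      i.  Perform HostPromiseRejectionTracker(promise, "handle").  (the new step)
      ii. Return ThrowCompletion(promise.[[PromiseResult]]).
```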

NRO: Are there observable consequences to this? Yes and no. Technically this is a normative change: as in the example before, it is observable because it changes the way host hooks are called, and those usually affect how some events are fired. However, on the web, the only non-cyclic module records we have are Synthetic Module Records, and there we already have the values—we’re just packaging them in a module after creating them—so that promise is never rejected, and the change is not observable. Outside of the web, we have CommonJS: when you import from a .cjs file, it is wrapped in its own Module Record, and the CJS module is evaluated in the Evaluate() method of that Module Record. However, Node.js does not expose the promise for that internal module as rejected through their rejection event—maybe they don’t actually create the promise, I don’t know how it’s implemented. So Node.js already implements the behavior that we would get by fixing this; Node does not implement the bug. So, yeah, to conclude, is there consensus on fixing this? There’s the pull request (#3535), already reviewed, in the 262 repository.

MM: Great. So I’ll start with the easy question. You mentioned the situation where there exists a promise that, when born, is already settled, and I understand why, and it all makes sense. I just want to verify that it does not violate the invariant that user code cannot tell synchronously whether a promise is settled or not—that the only way user code can find out a promise is settled is asynchronously. Is that correct?

NRO: It’s correct, because the way you can check this is through dynamic import, and you get a promise anyway. And also this promise is not a promise that was provided by the user; it was just a promise provided by the spec to the spec.
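
The invariant MM is verifying can be illustrated directly: even for a promise that is settled "at birth", user code only learns its state asynchronously.

```javascript
// A promise that is already settled when created:
const p = Promise.resolve(42);

let observed = 'not yet';
p.then(v => { observed = v; });

// Synchronously after attaching the callback, nothing has run yet;
// `observed` only becomes 42 on a later microtask.
console.log(observed); // 'not yet'
```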

MM: Great. And the concept of internal promises, or promises which are spec fictions, leads me to the more interesting question, which is the one that MAH posted on the PR that you already responded to. Could you recount that, and then I’ll respond after that.

NRO: Yes. So MAH was asking if internal spec promises are observable to the host hook. And I believe unfortunately the answer is yes, because if you reject a promise, it will call this host hook, and it’s just the host that will have to know, "oh, this is an internal promise, let’s not give it to the user", which I know is not the answer you’re hoping for. And it’s not just this specific module case, it’s about all internal spec promises.

MM: You’re right, it’s not the answer I was hoping for. It’s only being directly made observable to the host hook, and it’s only being indirectly observable to JavaScript code according to the behavior of the host hook. The problem is that right now, the behavior of the existing host hooks for this does reflect it back to JavaScript code: these internal spec promises do get [INAUDIBLE], as do promises that can be observed by JavaScript code. And I’ll just say, we’re rather aghast at the idea of the spec causing what were spec-fiction concepts to need to be reified as promises that become observable by user code.

NRO: Yeah, I guess I agree. I don’t know if hosts actually expose any of these promises, though. I didn’t check, outside of this one use case.

MM: Okay.

MM: Can the promises that are spec fictions in the module machinery remain unobservable—not reported as either handled or not handled, just not tracked at all?

NRO: Are you asking me or in general to the committee? I feel like this is a larger discussion.

MM: I am asking you first; I think it could be a larger discussion. I think we should, I just don’t know the area in depth. If you think we should and could, I recommend that we do.

SYG: I have a clarifying question: what is a spec-fiction promise in this case? Is it something that is synchronously accounted for? Like, you can write an async loop that counts how many times you went back to the microtask queue, and that is observable—so which tick is or isn’t scheduled becomes observable when you basically race it with, say, a for await loop that counts ticks.

MM: That is an interesting intermediate case, and thanks for raising that; I was not thinking about it. What I think of as a spec fiction for an object is whether user code itself can get access to the object, get connected to the object—does the object become reified? An object whose only behaviour observable by user code is additional ticks on the queue—those could be explained by just advancing the ticks by other means, or they could be explained by promises that are spec fictions, and we can still call them spec fictions the same way other objects are spec fictions. They have observable effects, which is why we have them at all, but user code can never get a hold of them. The original example from which I became aware of the distinction is the sync-to-async adapter, which is only ever explained with an additional object, but there is no way for user code to get ahold of that object.

SYG: I think the remedy in this case—because the question you asked NRO was whether we can get rid of them and how we could do so—to clarify, your preference would be: if we can get rid of them, by "getting rid of them" you mean removing from the spec even the construction of such promises, while keeping the observable behaviour the same through other explanatory means?

MM: That would satisfy my constraints, though I am not suggesting that we need to do that. If there is some other means by which the spec-fiction promises can be distinguished by this PR, so that the spec promises’ rejected-or-not status is not reflected to JavaScript user code, that would be satisfying. And the mildest thing that would be satisfying—I am not sure that I am happy with this, but I will suggest it to put it on the table—is this: since it depends on the behaviour of the host hook whether the report is reflected back to JavaScript code, simply making the spec promise observable to the host is not yet a violation of the language invariant; it leaves it to the host whether or not to violate it. But the path of least resistance, if we just accept this PR without a non-normative note, is that hosts will reflect the spec-fiction promises back to user code the way they reflect other promises back to user code. So if somehow we were able to make clear that we are advising hosts not to reflect these back to user code, and provide in the host hook enough information for hosts to make that decision, that would likely satisfy the concern that I have. And—

CDA: Okay, a quick note—we are about 8 minutes past time for this topic.

NRO: In this specific case the promise is provided to us by the host in the Evaluate() method of the module record, so we don’t know if the promise is fictional or a promise that is supposed to be used in some other way, or whether it was created just for this instance.

MM: I understand that technically, but in terms of what the practical status quo on the ground is, do we know of any host behaviors where these particular promises do get exposed to JavaScript user code, other than by the rejection tracking?

NRO: I don’t know.

MM: Okay, so let me say, I am in favor overall of the direction of this, but I do feel like, with us being out of time, I need to withhold consensus until we resolve this issue.

NRO: Let me see if we can talk offline and come back to this on the last day of the meeting.

MM: Okay.

KG: MM, I cannot imagine any outcome here where the particular behaviour being changed isn’t part of whatever it is that you are looking for. So it seems like we have consensus on this particular change, even though there are other changes that you would like as well; the change here is a change to one particular piece of behaviour.

MM: Okay, that is a very good point. The things I want to not be observable are already observable, just with the wrong tracking, and we are fixing how these inappropriately observable promises are tracked rather than fixing whether they are observable. Is that correct?

KG: This is causing them to be tracked in a way that results in them not being observable in practice, even though in some sense they are still observable to the host.

MM: I am not worried about a malicious host; the issue is existing hosts, and hosts that follow the path of least resistance in implementing this once it is in the spec, inadvertently causing observability of these promises. So, yes, we might agree to consensus on that theory, but I will withhold for now for one final reason, which is simply that this objection was raised by MAH, who cannot be present at this moment but will be present during the plenary. So—to KG’s point—if MAH agrees, I am happy with consensus.

CDA: We are at time and we need to move on, and SYG is on the queue.

SYG: Thank you, NRO, for a very clear presentation on the problem. A lot of this machinery is messy, and this was extremely clear, thank you.

CDA: Yes, DE is asking about a follow-up topic, and yes, we can schedule a continuation for this.

Summary

  • When using some types of non-JavaScript modules that throw during evaluation, the current spec does not call the HostPromiseRejectionTracker hook to mark the promise returned by .Evaluate() as handled.
  • The normative PR fixes it by explicitly calling the host hook.

Conclusion

Explicit support from multiple TC39 members including SYG. Blocked by MM due to a concern from MAH about spec-internal promises being exposed to user code through host hooks; a follow-on topic will continue this later.

Note about changed behavior of Array.fromAsync after landing #2600

Presenter: Kevin Gibbons (KG)

KG: Okay, let’s see. So I mentioned during the updates that we have this very old PR (#2600), and to recap what this PR does: when you have a for await loop that iterates over a synchronous iterator or iterable that yields a promise that will reject, the original behaviour was that the for await loop would treat that as the async iterable throwing, which is a violation of the iterator protocol—which is to say that the loop assumes the iterable has had a chance to do any cleanup that it needs to do before yielding such a promise. And this is not the case for sync iterables. So to ensure the synchronous iterator has time to clean itself up, the change was that we now close the iterator when it yields a rejected promise. The wrapper which does the lifting of the sync iterator to an async iterator checks if the sync iterator yields a rejected promise and closes the underlying iterator, on the assumption that the consuming for await loop would not close the iterator itself.
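
A minimal sketch of the scenario (the iterator here is hypothetical, for illustration): a sync iterable yields a rejected promise, and after #2600 the sync-to-async wrapper calls the iterator's return() before the loop throws.

```javascript
let closed = false;

// A sync iterable whose first value is a rejected promise.
const syncIterable = {
  [Symbol.iterator]() {
    return {
      next() {
        return { value: Promise.reject(new Error('nope')), done: false };
      },
      return() {
        closed = true; // cleanup hook; called by the wrapper after #2600
        return { done: true, value: undefined };
      },
    };
  },
};

async function consume() {
  try {
    for await (const v of syncIterable) { /* never reached */ }
  } catch (e) {
    return e.message; // 'nope' — the loop rethrows the rejection
  }
}

const result = consume();
// In engines implementing #2600, `closed` ends up true;
// previously the sync iterator was left open.
```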

KG: And that is a very good change. There is an invariant we are supposed to close iterators 100% of the time when we are done with them, and this is a necessary change to achieve that.

KG: There is also an outstanding proposal, Array.fromAsync, which is Stage 3, and I do believe it has implementations in all browsers; it is basically a for await loop which collects values. And it in fact uses the same spec machinery as for await loops. So when we made this change to the machinery for for await loops, it affected the behavior of Array.fromAsync when consuming a sync iterator which yields rejected promises.

KG: So this PR had the consequence of changing the behavior of Array.fromAsync. It’s not obvious from looking at the PR, because Array.fromAsync is not in the specification, and it is not obvious if you are looking at Array.fromAsync, because nothing changed in Array.fromAsync itself. But we changed a bit of the machinery Array.fromAsync was using, and the machinery was not in the same place as the thing that was using it, so I wanted to put that on the agenda to call out the distinct change that happened so no one is surprised.

KG: I believe the champions are in the process of getting tests written for this behavior, and I don’t know if there was a test for the old behavior, and it hopefully should be a straightforward change, and in some engines, they might have been using the same machinery internally as well, and it might have gotten fixed automatically. But this is a heads up about this weird case where we made a number of changes to the machinery that the proposal was using, and that changed the proposal, and I don’t know if that has come up before. Yeah, that is all I had to say.

JSC: Like KG said, there is a test pull request for test262 already open. This is a testable observable change. V8 already should pass it, while other engines I tested do not yet pass it. Also, work on Array.fromAsync has resumed. Hopefully it will reach Stage 4 within the year. That is all.

USA: That was it for the queue. KG, would you like to conclude?

KG: There was no request for consensus, so that's all.

USA: Yeah. All right, I guess that is it then.

Summary

The committee is advised that landing tc39/ecma262#2600 resulted in a change in the behavior of the widely implemented Array.fromAsync proposal despite no changes in its spec text. Test262 tests have been updated at https://github.com/tc39/test262/pull/4450.

Conclusion

This was just a notification to the committee; no consensus needed

AsyncContext Stage 2 Update

Presenter: Andreu Botella (ABO)

ABO: So this is an update on AsyncContext, focusing on the use cases and on some updates to the web integration, after some negative feedback that we got from Mozilla.

Slide

ABO: And first of all, on the use cases. When we were talking about the proposal previously, the things we were focusing on were: AsyncContext is a power user feature meant for library authors (such as OpenTelemetry maintainers) and not so much for the majority of the web developers. And one use case is enabling tracing in the browser, which is currently only possible in Node.js through AsyncLocalStorage, or other runtimes that implement it such as Deno or Bun. With AsyncContext, this would be possible in the web as well.

Slide

ABO: And all of that is correct. However, there are two clarifications on the use cases that we have not made as strongly as we are making now:

  • AsyncContext would be used by library authors to improve the user experience of library users.
  • And that AsyncContext is incredibly useful in many front-end frameworks, regardless of the tracing use case.

Slide

ABO: And so we have actually had some conversations with some frontend frameworks, and we are covering here some of the highlights.

Slide

ABO: The current status in frameworks is that you have some things with confusing and hard-to-debug behaviour. For example, with React, if you have an async function as the transition callback and you have an await inside of it, anything after the await gets lost and is not marked as a transition. And the React documentation says this is a limitation of JavaScript, and that it’s waiting on AsyncContext to be implemented to fix this.
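
The React limitation described above looks roughly like this (illustrative sketch only: startTransition is React's API, and setCount and fetchData are hypothetical names):

```javascript
startTransition(async () => {
  setCount(1);       // marked as part of the transition
  await fetchData();
  setCount(2);       // after the await, the transition context is lost:
                     // this update is NOT marked as a transition today.
                     // With AsyncContext, await would carry the context
                     // across, letting React fix this transparently.
});
```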

Slide

ABO: Another thing some frameworks do to avoid this is to transpile all async code. This can be as simple as wrapping await with this withAsyncContext function, in the case of Vue. And that will let them deal with things, but you need to transpile the entire code base, possibly including third-party code.

Slide

ABO: So about the use cases for certain frameworks: React has transitions and actions. If you have async code inside one of those, React would need to understand that it is a series of state changes that should be coordinated together into a single UI transition. The alternative is either having developers pass a context object through to every related API, which would be easy for them to forget, or transpiling everything, which for React would be invasive and a non-starter.

Slide

ABO: In the case of Solid.js, they have a tracking scope and an ownership context. Since this is a signal-based framework, they use this to collect nested signals and handle automatic disposal of them. And if you have await in them, you will lose both contexts.

Slide

ABO: For Svelte, on the server they have a getRequestEvent function to read information about the current request context. They’d also like to have a similar thing for client-side navigations, but that is currently impossible. Once again they could do this by transforming await expressions—again, transpilation—but they can only do that in certain contexts, which would lead to confusing discrepancies.

Slide

ABO: In the case of Vue, there is an active component context which can be propagated with await, but it only works when you have a build step with Vue single-file components, and not in plain JS.

Slide

ABO: If you have any cases that are relevant to front-end frameworks, and like to share them, please jump on the queue. It would be good to share them to convince implementers that this is really useful and would be worth the complexity.

CZW: I would like to highlight Bloomberg’s internal use cases. We have an internal application framework called R+ and we actually use a mechanism to instrument the internal engine so that we don’t need to transpile user code and we can run multiple application bundles in a single JavaScript environment. We call this co-location, and this allows us to save resources and improve performance, given that we don’t have to create a bunch of new environments for each application bundle, and there is no RPC between them.

CZW: In order to support colocation, we use this internal mechanism, which is similar to AsyncContext, to track callbacks and promises created by each application bundle and we use this context information to associate app metadata. And this is crucial for us to improve our web application and developer experience because they don’t have to pass any of this application metadata around to support our co-location feature. So this feature is really important in Bloomberg’s cases.

SHS: Google uses a polyfill of this for interaction tracing and, secondarily, performance tracing. It's critical for us because we use frameworks with a lot of loose coupling, so there aren't a lot of direct function calls where you could expand the parameters to pass additional tracer data explicitly. Examples of this kind of loose coupling would be event listeners, signal handlers / effects, and RPC middleware. In all these cases there is no way to pass tracer data explicitly. Beyond that, we are hoping that once the proposal is further along, we can use it for a number of other use cases: cancellation, for example, where you could have an ambient AbortSignal, would be really useful to have, but that is lower priority, so we're less interested in taking quite as big a risk by using it while it is still experimental.

Slide

ABO: Thank you for sharing your use cases, and now I will give an update on the web integration.

Slide

ABO: So the last time that we presented this in full was in Tokyo, and we gave a brief summary of the changes since then in December; but basically, one of the things that Mozilla highlighted for this proposal was that it increases the size of potential memory leaks.

ABO: On the web, this code used to only keep alive the callback and any scopes it closes over. As long as a click event can still happen, the callback is not a leak; and for the scopes it closes over, it is only a leak if it keeps alive things that are not used by the function. And I know that sometimes engines keep more things alive than they should for closed-over scopes, but that is a trade-off they make.

ABO: In the proposal as we presented it in Tokyo, addEventListener implicitly captures an AsyncContext.Snapshot, and a lot of the entries in the snapshot, a lot of those values will not be used by the callback, even if the snapshot itself is used, so this could be a leak—or will be a leak in most cases.

Slide

ABO: And so the proposal has moved towards a model where the context always propagates from the place where the callback is triggered. So here you have a click() method on HTMLElement which causes a click event to be dispatched synchronously. As part of that click, the context propagates from the click() call to inside the callback, and it only stays alive while the event listeners are being dispatched, and that is it.

ABO: If you have events that are dispatched asynchronously, like on an XMLHttpRequest object, when you call send(), that context will be stored for the duration of the HTTP request, and when the final event fires it can be released. This is what we are calling the dispatch context.

Slide

ABO: For some APIs there is no difference and the callback is passed at the same time that the work starts which will eventually cause it to trigger. The simplest example is setTimeout: in the old mental model, you pass the callback into the web API and thus it captures the context. In the new mental model, setTimeout starts an async operation to wait and then call the callback, and it propagates the context through that async operation. The behavior is the same, and it’s just like that for any APIs that take a callback and schedule it to be called at a later point. They will have the same behavior, so we can think of it with the new mental model.
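
The setTimeout equivalence described above can be sketched with the proposal's API (not runnable today: AsyncContext is Stage 2 and unshipped, and `sleep` is a hypothetical promise-returning helper):

```javascript
const requestId = new AsyncContext.Variable();

// New mental model: setTimeout behaves *as if* it were written in JS
// with promises and no manual context management:
function setTimeoutAsIf(callback, ms) {
  sleep(ms).then(() => callback()); // await/.then propagate the context
}

requestId.run('req-1', () => {
  setTimeout(() => {
    requestId.get(); // 'req-1' — the same result under either mental model
  }, 0);
});
```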

Slide

ABO: And for any API’s, the new behaviour should be what you would get if the APIs were internally implemented in JavaScript using promises and no manual context management. You can have an implementation of setTimeout that does a sleep and then calls the callback, and this would have the same behaviour. And if every API works like this, if we make all web platform API’s behave like most other APIs that developers will interact with, it will reduce the cognitive overhead of having to think of the context.

Slide

ABO: Now, in some cases execution of JavaScript code is not caused by other JavaScript code, and then there is no context. So if you have a user click that triggers a click listener, there is no context, because the source of that event does not come from JavaScript but from the browser or the user. This would also be the case for events coming from outside the current agent. In this case, the JS code would run in the “root context”, with all variables set to their initial values—the same context an agent has when it first starts.

ABO: Now there are some cases where you have regions of code—for example, on the server side, to track a particular request—and you want to identify the different regions of code. If you have something like one of these events that run in the root context, it would lose track of which region it’s in. Because of this, we have a scoped fallback mechanism to provide fallback values, which would be independent for each AsyncContext.Variable. You have an API that sets this for each AsyncContext.Variable, storing the value at that point, and it would apply to any event listeners that are registered within that region. So the context would have all variables set to their initial values, except for the variables which have fallback values.

Slide

ABO: And here you can read more details about the web integration or the memory aspects of the proposal.

SYG: Clarifying question—I did not quite understand how the new mental model keeps working for setTimeout. Maybe it helps if we go to slide 17 of the proposal. Could you explain: if the callback is no longer a thing that captures the AsyncContext at the point when addEventListener is called, something still has to propagate the original AsyncContext. How can the behaviour not be changed from the current mental model if the callback no longer captures the AsyncContext?

ABO: Because in the Ecma-262 part of the proposal, await propagates the context from before the await to after the await.

SYG: Oh I see. And my follow up here—maybe just walk me through, how this meaningfully reduces the time which context would be kept alive based on the leak concern?

ABO: In the previous proposal for web integration which we covered in Tokyo, calling addEventListener with the callback would store the context that was the current context when addEventListener was called, and that would stay alive forever unless you called removeEventListener with the same callback.

SYG: What about setTimeout? The click thing I understand, because you changed it to propagate the root context instead of the captured context—you just removed the capturing—but does this change the behaviour of setTimeout?

ABO: For setTimeout there is no difference, but I have it here because we are describing this with a new mental model, and with this mental model the setTimeout behaviour is the same.

SYG: That makes sense.

DLM: First off, I would like to thank the champions for the work they have done in putting together this presentation and reaching out to people that are involved in frameworks. With that being said, we still continue to have some concerns around the web integration: our concern is that it’s going to be a large amount of work to implement. I think the use cases are better stated now, but I don’t know if that has fully changed our calculation of whether the use cases justify what we see as a very large implementation. One thing I do see in the framework use cases is that it does not appear that people are necessarily looking for web integration of APIs; this is more the core linguistic JavaScript functionality. With that being said, I think we have represented our point of view, and I would like to hear—not necessarily in this meeting—from other implementers whether they share our concerns about the amount of work that might be involved in the web integration.

ABO: I think SHS’s was one example of a framework that did need the web integration, if I understood correctly?

NRO: Yes, thank you. DLM, do you have any suggestion for how to change the web integration? Knowing that would make it easier to adapt it the right way, if we know what the right way would be.

DLM: I don’t have a specific suggestion. There has been some work done to address our concerns about the memory leaks, but I think there is an issue on the queue as well: I feel like we are going to have to change a very large number of APIs, and with the two different potential contexts, this is a manual process where we would have to change a lot. I don’t have a simple suggestion for how this would work.

SYG: That sounds like confirmation of the answer on the queue: a lot of the work is in the number of APIs in the implementation that need to be made context-aware.

DLM: Yes, that is correct. At least in our initial analysis, it feels like it was a case-by-case basis: it was not just one place to change things, it feels like we would have to do things not per individual API but in a number of different places, depending on the type of API.

SHS: I don’t remember quite what you are thinking about in terms of web integration, but I will say that we do want to make sure that the context actually propagates coherently across both language built-ins and web APIs.

DE: Definitely a goal for the web integration design was to be consistent and, hopefully, mostly inferable from WebIDL. All of the stuff about falling back to the root context is a simplification versus previous versions, and is a move in the “doing nothing” direction, away from trying to solve all of the things. I hope that Igalia can show this in a generic way, rather than implying per-API work. That work on generic framing has not been completed yet, but that is the direction. We have a principle that everyone can intuit the behavior. In writing the spec, it could be centralized in one or two places in WebIDL, and likewise for implementations. That is the goal, and I guess the presenters are being conservative now because it has not been totally proven out, but I understand it would be necessary to meet those criteria before this can be accepted. DLM, if the context were propagated in this regular way, would that make it acceptable? Is that the kind of thing you are looking for?

DLM: I am not sure if it can be done that way, but yes, I think that would address a lot of our concerns.

NRO: So it is not possible to do in general—it is possible for setTimeout but not for events. It can be done semi-automatically in specs, but not through something you can auto-generate with WebIDL.

DE: If events can be changed in one way, then that still meets the criteria I am describing. So let’s think offline. This logical principle—one that developers can follow and that spec writers can follow—is a positive step. I am looking forward to seeing how it is proposed to update the specs. I know this is something you have been working on, and I am looking forward to seeing it.

SYG: DLM, to your earlier question about positions: in the beginning we, Chrome, shared a lot of the concerns about memory and about complexity—not just leaks; you would need machinery to keep the context alive, maybe a tree of contexts—so the usual implementation concerns. But currently we are positive despite those concerns. I don’t think those concerns have gone away, and we remain engaged despite them. It is true that the benefit from each individual API is small, but there are frameworks, libraries, and products that are eager to adopt this ASAP, they are explicitly adopting it for these capabilities, and their reach covers a pretty wide swath of users of the web. Because of that alone, I think it is worth being positive on it. Granted the amount of work, I cannot say I am happy about that either, but I am happy to see the direction the design is going, reducing the work without hurting the primary goal. Our position, to reiterate, is positive, and the payoff here has, I think, been demonstrated to not be speculative: there are multiple people on the record saying they will adopt this, which is relatively rare for the things that we are pushing.

DLM: Thank you.

CDA: That is it for the queue.

ABO: Yup, so, this was basically it. This was the Stage 2 update, and you can read more details on these links. Thank you.

Speaker's Summary of Key Points

This presentation focused on two main updates, addressing part of Mozilla's negative feedback about the complexity of the proposal and lack of use cases:

  • feedback from frameworks, about their use cases and about their need for AsyncContext to improve the DX for their users
  • some changes to the web integration to reduce the amount of snapshots that get captured and kept alive for too long

Conclusion

Multiple frontend web frameworks are eagerly waiting for AsyncContext to ship in browsers, to enable async/await in developers’ codebases without breaking framework-level tracking. However, while the use cases have been found convincing, it's still not clear yet that they are worth the implementation cost required by the proposal’s web integration. Different browsers have opposite opinions about this tradeoff.

Temporal Update

Presenter: Philip Chimento (PFC)

PFC: One day early which is something you can calculate with Temporal! My name is Philip Chimento and I work at the TC39 member Igalia, and we are doing this work in partnership with Bloomberg. I brought the news last time that Temporal is shipping in Firefox and it is available in nightly builds now. There have been some open questions raised about how to coordinate specifically the behaviors that in the spec we call "locale-defined." We are making sure that those are sufficiently coordinated between implementations and TG2 is addressing those questions. We will continue to analyze the code coverage and answer any questions that implementations have.

Slide

PFC: I show this graph every time: the percentage of test conformance for each implementation that has implemented Temporal. We added more tests since last time, so the baseline goes down slightly. But it looks like GraalJS and Boa—particularly Boa—have made specific gains in conformance to the spec. Some of the bars have gone down by imperceptible amounts, but the graph looks on the whole fuller than it did last time. The obligatory note: this is not percentage done but percentage of tests passing.

Slide

PFC: I wanted to highlight some new information about the use of BigInts in the spec. Previously there were concerns about this, and I showed in a previous presentation that you do not need to use BigInts internally, but you can use 75 integer bits and divide that however you like over 64- or 32-bit integers. I ran across an interesting paper recently, which is cited here, and did a quick proof of concept that represented epoch nanoseconds and time durations each as a pair of 64-bit floats. So you don’t have to deal with nonstandard-size integers. Just give me a shout if this is interesting to you for your implementation. There is a proof of concept in JavaScript using two JavaScript Numbers, and it does all the necessary calculations correctly, including the weird extra-precise floating-point division in Temporal.Duration.prototype.total.
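For illustration, the pair-of-doubles idea rests on error-free transformations such as the classic TwoSum primitive. This is a generic sketch of that building block, not the cited paper’s algorithm or PFC’s actual proof of concept:

```javascript
// TwoSum (Knuth): compute s = fl(a + b) together with the exact rounding
// error e, so that a + b === s + e exactly. A {hi, lo} pair of doubles
// built this way can represent values needing more than 53 bits of
// precision, such as 75-bit epoch-nanosecond counts.
function twoSum(a, b) {
  const s = a + b;
  const bVirtual = s - a;
  const e = (a - (s - bVirtual)) + (b - bVirtual);
  return { hi: s, lo: e };
}

// 1e16 + 1.25 is not representable in a single double (the ulp at 1e16
// is 2), but the pair captures it exactly: hi rounds, lo holds the residue.
const pair = twoSum(1e16, 1.25);
console.log(pair.hi, pair.lo); // 10000000000000002 -0.75
```

The invariant is that `pair.hi + pair.lo` equals the mathematical sum exactly, which is what lets an implementation avoid arbitrary-precision integers.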

DLM: I just wanted to say that we're planning to ship this in Firefox 139.

SYG: What is this locale dependence?

PFC: This one specifically is about the era codes that CLDR provides. I can link the issue if you want to read up on it.

SYG: I am wondering given that this—like all of Intl depends on locale data, what is special about this case?

PFC: Let me pull up the issue. So there are a couple of issues in the Intl Era and Month Code proposal, which is a separate proposal that we hope to present at the next meeting. One of the issues is where the year zero starts in the eras of various calendars. Another one is the constraining behaviour for nonexisting leap months, which is calendar dependent. These are things that CLDR does not necessarily define currently, and it should. So the issue is agreeing on the behaviour that CLDR should have so that gets reflected in the various internationalization libraries that will get pulled in by the implementations. (tc39/proposal-intl-era-monthcode#32, tc39/proposal-intl-era-monthcode#30, tc39/proposal-intl-era-monthcode#27, plus various bikeshedding threads about updating the era codes provided by CLDR)

SYG: I see, makes sense.

PFC: If there are no more questions, I think we can conclude and I will put a summary in the notes.

Speaker's Summary of Key Points

  • Firefox 139 will ship Temporal.
  • Boa and GraalJS have substantially increased their conformance with the test suite.
  • There's a proof of concept available for doing all the BigInt or mathematical value calculations in the spec, using a pair of JS Numbers.
  • TG2 is discussing some locale-specific behaviour in the Intl Era and Month Codes proposal.

Conclusion

Temporal is at Stage 3 and ready to ship

Composite Keys for Stage 1

Presenter: Ashley Claymore (ACE)

ACE: So hi, I am Ashley. I am one of the Bloomberg delegates and I am excited to actually be proposing something today. I have presented, I think, three times on this design space, never proposing anything, just trying to share my current thoughts and to elicit feedback. And particularly, based on the feedback and the conversations we had in Seattle, I felt like the time had come for a proposal, and here we are.

Slide

ACE: So this follows on very much from the previous presentation I gave in February, and the ones before that. So I don’t want to recap too much stuff from those. I will do my best to make this accessible to as wide a group as possible, but I would encourage people to look at the previous slides if they feel like they do need more context.

Slide

ACE: So I will be asking for Stage 1, and some people might think, "a lot of this is very similar to records and tuples, and that’s a Stage 2 proposal, so what is going on?". Separately from this session, I put on the agenda a request to withdraw the Records & Tuples proposal, and this current agenda item is for a new proposal that I see as a reimagining of a very similar problem space. And I think it’s a significant enough reimagining that it just makes sense, and is easier all around, to start from the start at Stage 0 and see if we want to go to Stage 1, with a new kind of branding. Even if we end up calling things records or tuples, this is the best way, process-wise, not only for us in the committee but for the general JavaScript ecosystem, to help everyone follow what is happening.

ACE: So I don’t want to focus too much on Records & Tuples being withdrawn, I have a separate item on the agenda for that, which is currently set for tomorrow (note: it ended up happening later the same day).

Slide

ACE: This problem space I keep referring to, it’s about this situation you may find yourself in. You have got objects that represent the same data. Two positions, both representing the same coordinates. But when you put them in a Set, you still have two things in that Set. I am using Sets here because it’s easier to talk about but it’s the same with Maps. Sets and Maps work great with strings and numbers, but when you have an object it really only works if the thing you care about is the object’s identity. Not the data it represents. This is unlike other languages, where it's common to be able to override that behavior for objects.
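For illustration, the problem in runnable JavaScript:

```javascript
// Two objects representing the same coordinates...
const position1 = { x: 1, y: 2 };
const position2 = { x: 1, y: 2 };

// ...are still two distinct entries, because Sets compare objects by identity.
const positions = new Set([position1, position2]);
console.log(positions.size); // 2

// Strings and numbers compare by value, so Sets work as expected for them:
console.log(new Set(["1,2", "1,2"]).size); // 1
```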

Slide

ACE: So what do JavaScript developers do today? There could be a library solving this, but what I see a lot is: no need to reach for a library when we have JSON.stringify. This gives people a seemingly quick fix for the problem, because now I add my two positions to the set and the set is size 1. But I now have so many other problems that I am perhaps not even aware of, because I am copying how I see other code handle this and just falling into the same trap as everyone else.

Slide

ACE: So JSON.stringify is impacted by key order. If you have two objects implementing the same interface but created in different areas of a codebase with different key order, they stringify to different strings—it’s not safe. Also, some values will throw, BigInt for example. Other values are lossy: NaN becomes null, and there are other examples of things losing information when they become a string. And in many cases—not all, but it’s easy to think of lots of them—the string representation of something occupies a lot more memory. And at the end of the day, you have a string. Your sets and maps are now filled with strings. If you want to iterate over those and do something with them, you have to go back the other way, turning the string back into an object.
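The pitfalls just listed, demonstrated directly:

```javascript
// Key order leaks into the string, so equal data can produce unequal keys:
console.log(JSON.stringify({ x: 1, y: 2 })); // '{"x":1,"y":2}'
console.log(JSON.stringify({ y: 2, x: 1 })); // '{"y":2,"x":1}'

// Some values are lossy—NaN silently becomes null:
console.log(JSON.stringify({ n: NaN })); // '{"n":null}'

// And some values throw outright:
try {
  JSON.stringify({ big: 1n }); // BigInts are not serializable
} catch (e) {
  console.log(e.name); // 'TypeError'
}
```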

Slide

ACE: So this is all not great. And it’s a bit of a problem. CZW actually reached out to me after seeing these slides and said that they do exactly this in the OpenTelemetry package, and this is a snippet of it—they have this whole custom HashMap but I am just showing part of that code here. It uses JSON.stringify and stores two maps so you can do the reverse mapping. And you can see here, they have taken into account one level of sorting the keys. Because they know that these objects just have one level. So I am not just making this up. This is what people do today.
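A hypothetical reconstruction of that pattern (the class name and shape here are illustrative, not the actual OpenTelemetry code): a map keyed on JSON.stringify with one level of key sorting, plus a reverse map so iteration can recover the original key objects.

```javascript
// Sketch of a stringify-keyed map with a reverse mapping, as described above.
// Handles only one level of string-keyed properties—the same assumption the
// real-world snippet reportedly makes.
class StringifyKeyedMap {
  #values = new Map(); // serialized key -> value
  #keys = new Map();   // serialized key -> original key object

  #serialize(key) {
    // Sort one level of keys so property order does not affect equality.
    const sorted = Object.fromEntries(
      Object.entries(key).sort(([a], [b]) => (a < b ? -1 : 1))
    );
    return JSON.stringify(sorted);
  }

  set(key, value) {
    const s = this.#serialize(key);
    this.#keys.set(s, key);
    this.#values.set(s, value);
    return this;
  }

  get(key) {
    return this.#values.get(this.#serialize(key));
  }

  *entries() {
    // The reverse map lets callers iterate with real objects, not strings.
    for (const [s, value] of this.#values) yield [this.#keys.get(s), value];
  }
}
```

With this, `m.set({ b: 1, a: 2 }, v)` and `m.get({ a: 2, b: 1 })` agree despite differing key order—at the cost of all the stringify problems above.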

Slide

ACE: So what am I proposing? So I am going to propose something that maybe looks more like a solution. And that’s maybe wrong, why am I proposing a solution when we should, at Stage 1, be focussing on a problem? The reason I am proposing something that looks like a solution is, one, we have been talking about this problem space for, like, at least four years while I’ve been in the committee and I am sure it dates further back than that. So I think the thing that is really needed here is actually, what are we doing, especially as there have been other proposals in this space, so I think it’s important that this proposal is not only saying the problems, but how it’s intending to address them. Also, because even with the things I am going to propose, there’s plenty of design space to talk about—this is by no means a complete solution. It’s just the core of the idea. The names and API can change.

Slide

ACE: So what is that idea? The idea is a new thing in the language I am calling them "composites" for now. When I put one into a Set, the Set sees that the things I am putting into the Set are composites and it switches to the new behavior, where it sees these things are equal, according to how composites are equal, which I will explain later. And now, I only have one in the Set.

Slide

ACE: So what are these composites? These are objects, not new primitives being added to the language. And parts of this proposal are driven from feedback we got from records and tuples, not only the implementation complexity, which hopefully you can see how there is lower complexity in the implementation, but also for the developer understanding of the language. Like, there was concern of introducing new primitives on both sides, the developer experience and the implementer experience. So these are not new primitives. They are objects.

Slide

ACE: And you always get back a new object from this factory. There’s no reliance on garbage collection or GC semantics to trick the Sets into saying these things are equal.

Slide

ACE: And they don’t modify the object—it isn’t like Object.freeze. MM gave me a useful word for the argument I am passing in: we can see this as coercing the input to a composite, or as taking the argument as a "template" for what the composite should contain. It’s not modifying the input to become a composite itself.

Slide

ACE: Here I show that the function throws when called with new. Maybe this bit should change. But the way I was thinking of them is they’re not classes with a prototype. Instead, they are like a factory function. Maybe this is something we should discuss. Maybe during Stage 1. But that is what I was thinking, it’s not like a class hierarchy.

Slide

ACE: The argument you’d pass has to be an object.

Slide

ACE: and the composite is frozen from birth. So you can never observe a composite in a mutable state. A composite is always frozen.

Slide

ACE: And they’re not opaque. You can see the things that a composite holds as its constituents. So I have created a composite that has 'x' set to '1'. And then if I look at the keys on that composite, then it has a key of 'x' and I can read that and get '1' back out. If you have a map or set with composites as keys, you can iterate over them and use them as data without having to do a reverse mapping.

Slide

ACE: They’re generic, and by generic, I mean they can store T, they can store any value. They’re not like records and tuples which were primitives that can only contain more primitives. So here, you can put a Date object in, and then if I read that property back out I get the original reference to that object. It’s not deeply converting everything. It’s saying, here I have a property 'd', and that stores the reference to that date. And I am also thinking, you should be able to store negative 0 and maybe that’s another thing we should discuss, maybe during Stage 1.

Slide

ACE: So yes, there’s two things. One, it’s not doing a deep conversion. Also, you can store any value in here. So that means, these things aren’t necessarily deeply immutable, but they could be if everything you put in them is deeply immutable. So they don’t give you that guarantee. But you certainly can construct deeply immutable data from them.

Slide

ACE: As there will be a way that you can check if an object is one of these special composites. If you created a proxy of one of these, it would be false. It’s not like Array.isArray where you can check the proxy's target.

Slide

ACE: So that is like what they are on their own. I guess the thing that is more exciting about them is how they are equal to each other. So the simplest possible case…

DLM: There’s a clarifying question in the queue.

JLS: Just a question there. The properties passed in, are they also frozen deeply? So if I have an object, an existing object, and it’s one of the properties I am passing in, I have an example there in the queue… in the question itself.

ACE: There’s no deep conversion. In your example, if you create a composite of a property foo that is an object it is not modified or touched in any way. The composite only contains a reference to that original object.

JLS: Okay.

ACE: So the composite itself is frozen but the things it references don't necessarily need to be.

JLS: So the equality, then, that you spoke of using a composite in a set, it’s—is that equality, a deep equality? Or…

ACE: I will come on to that.

JLS: Okay. Thank you.

Slide

ACE: So yes, a more interesting example is these two things. Both have an x and a y. The key order doesn’t matter. And there is a choice in how that is achieved: does it just ignore the ordering when comparing, or does it try to sort the keys when it creates them? That gets us into a bunch of questions about symbol keys. So at the moment, I am thinking it doesn’t sort the keys. So here I have two composites. If you ask the first one for its keys, it gives x then y, and the second one gives y then x. But when you’re comparing them, that wouldn’t matter. There’s an issue on the proposal about whether we want to do something different. But in general, the goal, however we achieve it, is that you shouldn’t have to worry about key order. That’s one of the problems we are trying to solve.

Slide

ACE: So the equality is symmetric. Checking if A equals B is no different from asking if B equals A. It doesn’t matter if one is a subset of the other. These aren’t equal because one has extra keys from the other.

ACE: So to JLS's question, "is it deep?". It is deep while the kind of backbone that it’s following is still a composite. So as it’s walking, every time it sees a composite, it keeps using recursion to check if they are equal. If you have two big trees made of composites then it’s doing deep comparison. But as soon as you have something that is a regular object then you are back to pointer equality.

Slide

ACE: So here, these are not equal. Because the composites are referencing two different objects.

Slide

ACE: Whereas this is equal because they’re both referring to the same object.

Slide

ACE: So what does that look like in pseudo-JavaScript code? Composite.equals starts with the base case of SameValueZero. Though, again, maybe this is something we should discuss in Stage 1—maybe it shouldn’t be SameValueZero; the alternative is SameValue. One of those two operations.

Slide

ACE: Then if either one of the arguments isn’t a composite, then it’s not going to be equal. Otherwise, both arguments are composite, so let’s compare them using this secondary 'equalComposites' function.

ACE: So we first get the keys of one. Compare to the keys of the other. They have to have the same set of keys. Otherwise, we return false.

ACE: And then we loop through the keys and recurse back to the beginning - are the values of the two keys equal?

ACE: The main thing I want to show here is that when you are comparing composites, you have lots of opportunities to return false early. The worst-case comparison is when the two things are equal—that is when you have to get all the way to the end to be confident of it. Unless you’re literally comparing a composite to itself by reference, in which case it would immediately return true.
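The algorithm just walked through can be modelled in userland JavaScript. This is a sketch only—`Composite`, `isComposite`, and `equals` are stand-ins for the proposed API, with a WeakSet tag in place of an internal slot:

```javascript
// Hypothetical userland model of the proposal's equality semantics.
const composites = new WeakSet();

function Composite(template) {
  const c = Object.freeze({ ...template }); // shallow copy, frozen from birth
  composites.add(c);
  return c;
}

Composite.isComposite = (v) => composites.has(v);

// SameValueZero: like ===, except NaN equals NaN.
const sameValueZero = (a, b) => a === b || (Number.isNaN(a) && Number.isNaN(b));

Composite.equals = function equals(a, b) {
  if (sameValueZero(a, b)) return true;            // base case
  if (!Composite.isComposite(a) || !Composite.isComposite(b)) return false;
  const aKeys = Reflect.ownKeys(a);
  const bKeys = Reflect.ownKeys(b);
  if (aKeys.length !== bKeys.length) return false; // early false
  for (const k of aKeys) {
    if (!Object.hasOwn(b, k)) return false;        // same key set, order ignored
    if (!equals(a[k], b[k])) return false;         // recurse into values
  }
  return true;
};
```

Key order is ignored, recursion continues only through composites, and non-composite values fall back to (SameValueZero) pointer equality, matching the slides.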

Slide

ACE: So the really good things about this equality is, all of these things. So it’s guaranteed to have no side effects. These things can’t be proxies. They don’t have any traps, asking for the keys and reading those values is always safe. The words I was looking for earlier, symmetric, reflexive. All of the things required to be well-behaved map and set keys.

Slide

ACE: So where would this equality appear? It definitely appears if you call Composite.equals. And then the real key part of this idea is that it works out of the box for Maps and Sets. And then also in the other places that currently use SameValueZero, which would be Array.prototype.includes—and if we do includes, it feels wrong not to also do indexOf and lastIndexOf. So we wouldn’t be changing those for existing values; they would still use strict equality unless the thing you are passing as the argument is a composite.

ACE: So there’s no web-compatibility-breaking change to any of these things. The semantics are identical to the current semantics; it’s only when the argument is a composite that the new semantics apply. I guess that asterisk applies to all of them. Mainly I am trying to say that for indexOf we are definitely not changing from strict equality when the argument is anything else. So NaNs are still not found in arrays according to indexOf, but a composite containing NaN would be.

Slide

ACE: So they might also appear in future bits of spec which don’t exist yet. For example, in MF’s proposed Iterator.prototype.uniqueBy, the callback you pass in to determine how things should be compared could return a composite; under the hood it uses a Set-like structure to filter out the duplicate values from the iterator. So there’s opportunity for this to appear in more places in the future.
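A rough userland approximation of that interplay (the `uniqueBy` name comes from MF’s proposal, but this signature and the `keyEquals` parameter are hypothetical—here a plain structural comparison stands in for composite equality):

```javascript
// Dedupe an iterable by a derived key, comparing keys with a caller-provided
// structural equality. O(n^2) for clarity; a real implementation would hash.
function* uniqueBy(iterable, toKey, keyEquals) {
  const seen = [];
  for (const item of iterable) {
    const key = toKey(item);
    if (!seen.some((k) => keyEquals(k, key))) {
      seen.push(key);
      yield item;
    }
  }
}

// Stand-in for composite equality: flat, own-enumerable, === on values.
const structurallyEqual = (a, b) =>
  Object.keys(a).length === Object.keys(b).length &&
  Object.keys(a).every((k) => a[k] === b[k]);

const users = [
  { first: "Ada", last: "Lovelace", age: 36 },
  { first: "Ada", last: "Lovelace", age: 37 },
];
const unique = [...uniqueBy(users, (u) => ({ first: u.first, last: u.last }), structurallyEqual)];
console.log(unique.length); // 1
```

With real composites, the `toKey` callback would simply return `Composite({ first, last })` and the built-in Set semantics would do the deduplication.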

Slide

ACE: So equality is linear, but in some negative cases it will be faster. Internally, the way people would need to implement these—and the way the example polyfill implements them—there is hashing under the hood, but it doesn’t expose that hash value in any way. When you put these things in a Map or a Set, it wouldn’t literally scan every composite doing a fully linear comparison. It would do an initial hash lookup first, and only need to do the full comparison when there is a hash collision—for example, when the composites really are equal. And because these things are immutable from birth, there’s no way to create cycles in this equality, so you can traverse them safely without needing to keep track of where you have already been.
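One way to see why key order need not matter for the hidden hash (a toy sketch, not any engine’s actual scheme): combine per-entry hashes with a commutative operation.

```javascript
// Toy structural hash for flat composites with string keys and primitive
// values. A real engine would use identity hashes for object references and
// recurse into nested composites; this only shows the order-insensitivity trick.
function hashString(s) {
  let h = 0;
  for (let i = 0; i < s.length; i++) h = (Math.imul(h, 31) + s.charCodeAt(i)) | 0;
  return h;
}

function compositeHash(c) {
  let h = 0;
  for (const [key, value] of Object.entries(c)) {
    // Addition is commutative, so enumeration order cannot change the result.
    h = (h + (hashString(key) ^ hashString(String(value)))) | 0;
  }
  return h;
}

console.log(compositeHash({ x: 1, y: 2 }) === compositeHash({ y: 2, x: 1 })); // true
```

Equal composites must hash equal; unequal ones usually hash differently, so the linear comparison only runs on (rare) collisions.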

Slide

ACE: So I have a bunch of bonus slides. But I will use them if the topic comes up. I would like to go to the queue. So MM is up first.

MM: Yeah. So the fact that isComposite as well as Composite.equals operates on the argument instead of on this, and that compositeness is not transparent to proxies—that looks like a dangerous precedent if it’s allowed to go by without examination. So I want to take a moment and explain why that’s okay in this case, and why it’s not symptomatic of a more general principle that leads to a wider precedent.

MM: The reason it’s okay in this case is that—first of all, the reason why it might not be okay is because of ruining practical membrane transparency. Indeed, I imagine this does actually ruin practical membrane transparency for existing membrane code—membranes built out of proxies—for existing membrane code based with composites where the membrane code did not know about the possibility of composites.

MM: The reason why, going forward, this is repairable by the authors of the membrane code is that a viable way to restore practical transparency exists, due to the very passive nature of composites. None of these operations on composites trigger user code; the contents of a composite are simply those of a simple object with frozen data-only properties. Therefore, a proxy participating in a membrane, when faced with a real target that is a composite, can simply produce another composite—not a proxy on a composite—on the other side of the membrane, and producing that composite has to go through all of the same issues as creating a shadow target. If the original composite referred to X, then the composite on the other side of the membrane generally has to refer to a proxy for X, and vice versa. Because there’s no user code involved, that will restore practical membrane transparency. I also want to just remind everyone that there is the big issue—I see somebody mentioned that. Okay. That’s all I had to say.

ACE: Yeah. Thanks. I agree. We shouldn’t naively see this as precedent for change on a general design constraint. And this is an exception, not a new rule.

MM: Good.

LCA: Hey. Your slide on storing a data object in a composite, many times those are objects that you want to compare by value, not by identity. So I was just wondering whether you had put any thought into how that could work? Like, do you think the approach here is that users should like to turn these values—like, these Date objects into strings before putting them into composites? Or do you imagine a toComposite method on these objects would give you something to put in a composite or anything else?

ACE: Yeah. So if—yeah. If we were starting Temporal in two years’ time and we already had Composites I think it would be really nice if Temporal objects were composite objects. Unfortunately just the way that the language is designed in a particular order means, yes, they wouldn’t be.

ACE: And I think at least for Temporal, the good thing there is that—in my understanding, please correct me if I am wrong—all Temporal values do have a canonical lossless string representation. Especially now that we don’t have custom calendars. Yes, if you want to create a Composite that has a start date and an end date, then to get the equality that you probably want to turn them into a string in that case. Or construct kind of your own—a different type of composite specifically for Temporal types where the constituents of the composite are the parts of temporal type so it's not flattened to a string. But yeah. Because we can’t make all Temporal things composites, it’s my understanding, then I think this doesn’t just work out of the box unfortunately.

PFC: I agree, it won’t work out of the box. But there are probably ways to accommodate this use case with special cases in the composite factory function. It would be web compatible to have special cases there, because nothing has ever been passed to the composite function on the web before. You could, for example, say, if you pass a Temporal object to Composite, it will ignore expando properties and read the internal slots. I haven’t thought through the implications of the idea, but it's an example of something we could think of in the realm of special cases to make these use cases work. I do see that people want to use Temporal objects as hash keys.

ACE: Yeah. I think this problem already exists. It’s already the case that if I have a Temporal.PlainMonthDay, I can’t use it as a value-semantic map key over its Temporal data. So composites don’t introduce the issue, but perhaps only compound it: now if I have two of them, I also can’t compose them together, because even on their own their equality in a Map is that of object identity.

ACE: I would like to move on to WH.

WH: I just wanted to double-check that, no matter what you pass to Composite.equals, it will not run user code?

ACE: Correct. Assuming a well-behaved implementation that isn’t going to do something the spec says it shouldn’t, then yes, there’s no user code. You can check whether something is a composite, and it will only read the properties and interact with the object if the object is a composite. And if the thing is a composite, none of the operations used during equality checks can trigger user code.

WH: Thank you.

SYG: So we chatted about this with the V8 folks. The biggest piece of feedback we had was an alternative design which canonicalizes in the factory function: de-duplicate in the factory, using the equals semantics you have laid out, and return the canonical object. With that, the performance is different. You have a different bottleneck, where the canonicalization is slow: for the same reason equals is linear in the worst case, in the worst case here you would have to check against this table, and because it is canonical with respect to everything you might create, the domain you are comparing against is possibly larger. On the other hand, you get other very nice benefits. You can continue to just use === everywhere, because it’s a canonical copy of the object, and as an object it’s just pointer identity. Nothing else needs to change, and the comparisons are fast.

SYG: This tradeoff makes sense if these are indeed used as keys: it stands to reason that you create fewer keys than you check for equality. So what are your thoughts on that alternative design instead of the current one?

ACE: Yeah. I guess one thing about that design, whenever we have discussed it in the past, is that one of the constituents has to be an object. You can’t create a pair of numbers, because if you return the canonical representation of that, its lifetime is infinite. What if you try to put that object in a WeakMap, a WeakRef, or a FinalizationRegistry? It has to live forever, because the canonicalization of two numbers has no expiry. I wouldn’t want to say you can’t create a pair of two numbers; that doesn’t feel great. It also moves all of the work to object creation, which was one of the concerns with Records & Tuples. Yes, the comparison is now cheaper, but if you are creating lots of these you have to eagerly do all the work up front. Whereas maybe you are just checking Composite.equals, and if the very first two keys are different you don’t need to traverse the whole object and canonicalize; you can see immediately they are not equal and stop working.

ACE: I had assumed that this was off the table because of the discussions around Records & Tuples, as it has a lot of the same implementation complexity, minus introducing a new typeof. But if that’s on the table, I’m certainly up for discussing it.
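The eager-canonicalization alternative SYG describes can be sketched in userland (hypothetical and deliberately simplified to flat, JSON-serializable inputs; note the intern table keeps every canonical object alive forever, which is exactly the lifetime concern raised here):

```javascript
// Sketch: the factory interns structurally-equal inputs and returns one
// canonical frozen object, so plain === works afterwards. The Map leaks by
// design—canonical pairs of primitives have no expiry, hence the WeakMap
// concerns discussed above.
const internTable = new Map(); // serialized form -> canonical object

function canonicalComposite(template) {
  const entries = Object.entries(template).sort(([a], [b]) => (a < b ? -1 : 1));
  const key = JSON.stringify(entries);
  let canonical = internTable.get(key);
  if (!canonical) {
    canonical = Object.freeze(Object.fromEntries(entries));
    internTable.set(key, canonical);
  }
  return canonical;
}

// All the cost is at creation; comparison is free pointer identity:
console.log(canonicalComposite({ x: 1, y: 2 }) === canonicalComposite({ y: 2, x: 1 })); // true
```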

SYG: I will respond to some of the points. The—remind me of the first response you said

ACE: …having an object constituent

SYG: The WeakRef thing is true. My response to that is, I think it would make sense in the canonical—in the eager canonicalization alternative that it would not be usable as a weak target for the same confusion as Symbol.for. Even with composite key as as presented, that potential for surprise already exists. If people are using composite keys as, you know, a pair of two numbers, it may surprise somebody that that—to use that composite as a key in a WeakMap, it may surprise the user that that entry may be collected from under them.

SYG: Right? If the mental model is just a composite key of two numbers, I think my intuition there is that whether we do canonicalization or you do the current proposal, there is potential for confusion. I am not sure how successful we can communicate, it’s an object that looks like any other object. That goes into a small point I see somewhere in the queue about having the—the new keyword if I am leaning to that. The—the difference between this proposal feedback and canonicalization was bad for Records & Tuples on the V8 embedder side, if you have a different’ type, it’s not pay as you go… if it’s a canonicalized object it’s complex [plit] pay as you go. It’s just an object. And I want to dig in through some time later, on the queue if we have time.

SYG: And the other issue that—that gave us was the use cases, where a lot broader than just composite keys. And when I chatted with the people about composite keys as a use case, it seemed to be relatively about fewer objects, shallow object graphs which are very different performance-wise from many kinds of objects. Arbitrarily complex object graphs. I don’t think people are keying things on arbitrarily complex object graphs. So if that is an assumption, a use case we’re designing for, it seems less—a lot less problematic to bottleneck everything, all the expensive work in the constructor. Now, if you think it is still worthwhile solving for the many objects, arbitrarily complex object graph cases, then I have my doubts this is the composite key proposal, but there’s a longer conversation we can pick up later.

DE: So I—I guess I have two questions for SYG. One question is: do you see this proposal as pay-as-you-go? Because it’s only hit in kind of this extra branch to make a comparison. Or is that extra branch considered more expensive? And also, wondering, you know, how confident are you that people won’t want this to be cheap, the allocation of this? Is there a hope derived from the use case?

SYG: We don’t use the first case as pay as you go because of the extra branch—as a combination of the extra branch across multiple data structures. And this becomes a thing that would be common in all data structures we design, that check equality. We need a protector here or something like that. For the second question—

DE: If case you never use the feature?

SYG: You never—

DE: Yes.

SYG: Yeah. The—yeah. Okay.

SYG: So the second question, yes. It is—I have no idea. I don’t want to say I am confident or not confident. I have no idea. If we believe are people are reaching for this via composite key, the less concern about the key creation being cheap and the lookups being cheap

ACE: Part of me doesn’t want to think of these things as only composite keys. That certainly is the primary reason for adding this to the language. But what I wouldn’t love happening is if, you have to—you do completely separate your data from the composite key, because then otherwise, what ends up happening; every object has, like, a 'getComposite' or 'toComposite', then which is annoying if the thing is a person, and the person has an inner company field and the company is a composite. So it gets deep. It’s easier in that case to use the composites as your data. So you don’t have to keep kind of converting things to composites, when you do want to put them in a map key. So I do want the proposal to focus on the use case of keys for maps. And would like them to have the potential of how the language—where the language could go over the next, you know, 10, 20 years. Maybe these things do become something that actually forms more of the way you actually model the application. But I could see the startup application development going there, but would then necessitate the creation being cheap because you wouldn’t necessarily pay off the cost of actually doing the comparison. So that’s where I am thinking about it. Yeah. I really—I wasn’t expecting your comment. I will have to have time to think about it.

MM [on queue]: I am uncomfortable with canonicalization.

KG: I support Stage 1. This gives me everything I wanted from Records & Tuples. This is the case I had for Records & Tuples. I am worried that this is not the use case for, like, half of the people who wanted Records & Tuples. There’s a lot of people wanting Records & Tuples for reasons I didn’t fully understand, wanting immutable objects, but not liking Object.freeze or something like that. And I confess that I just never really figured out what people were excited about there. I am hesitant to completely dismiss all that. But like I said, it gives me everything I personally want. So I am supportive of this.

ACE: Thanks, KG.

CZW: Yeah. I want to echo that this also matches the OpenTelemetry use case: it does not only provide equality comparison, but also allows iterating the keys inside a map without keeping a reverse map to the original key object. So this is really helpful to us as well. I support this for Stage 1.

SYG: We touched on this a little bit, but I want to reiterate here what KG was saying, given that so much of what we heard about Records & Tuples came from different camps—people who were really, really excited about immutable functional data structures, where the performance implications were not clear, and that’s where a lot of the implementers were pushing back. I would be very uncomfortable if this API were designed to be flexible enough that people just use it for immutable functional data structures and then end up finding it’s not a good fit. So, thoughts on taking the use-case lane that this API says it’s going for and then sticking to it.

ACE: Okay. Yeah

SYG: Your thoughts on that. You said both—

ACE: Yeah. Like, the core thing I want is this kind of new capability. So I think the initial API can make that very clear: when you are constructing these things, it’s to create a composite key. For example, I am not going to sneak syntax for this in during Stage 1, because when you have nice syntax, we as a committee are saying “use these as your immutable data”. While the API is more verbose—maybe we should even make it Composite.create to make it more verbose—it’s not going to become the default data structure. But if we want to explore adding more immutable data structures to the language in future, composites seem like the perfect base for that, and I want that door to be open. It would feel wrong if, in the future, we added immutable data structures and they weren’t composites. As per my previous presentations, composite keys need to be immutable; so if you then add immutable data structures, they might as well also work as keys. So I hear what you are saying. It’s a difficult part of the language design, and small changes here could impact where these get used. It also impacts how we might want them to work from a performance perspective. I want to discuss this more with you, but maybe not when we only have 10 minutes left. I definitely hear what you are saying.

DE: SYG, is your concern about this being a poor fit the reason for pursuing the canonicalization alternative and making sure that is workable? Or are there other particular concerns you have if this gets overused?

SYG: I think the canonicalization thing is a possible solution to the deeper performance tradeoff concerns. I can see very different implementation strategies for the shallow-and-few case—if you believe composites are shallow and few, that’s very different from expecting most of them to be deep with many composite pieces.

SYG: I am not convinced that—

DE: That’s the pattern I would expect

SYG: My preference would then be to just not use composites for the other cases.

DE: Right. But this—we are talking about the application code. Not your code. I am not sure what we could use to prevent this. Do you think we should be making this not transparent, like you can’t access what’s in it?

SYG: It’s like one such strategy, where it’s like—the cost is upfront during creation. Therefore, it favors one kind of pattern. We want that pattern to be fast and to be the happy path. That is—

DE: My question was, are there other things besides canonicalization that come to mind? For you.

SYG: Not at this point. No.

DE: Okay.

ACE: Yeah. I see there’s a reply. One thing I was imagining engines would possibly do is calculate a hash value for these things. The equality check, in the cases where you don’t have a hash collision, is obviously still not as cheap as pointer equality, but it reduces the number of times you fall into that deeper comparison case. Yeah. JSC has a reply.
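
The strategy ACE sketches—precompute a structural hash at creation, use it to reject non-equal candidates cheaply, and only fall back to a deep comparison on a hash match—can be illustrated in userland. All names here (`makeComposite`, `compositeEquals`, the hash function) are hypothetical stand-ins, not the proposed API.

```javascript
// Simple string-folding hash; only a sketch, not a production hash.
function hashValue(v, h = 0) {
  const s = typeof v === "string" ? v : String(v);
  for (let i = 0; i < s.length; i++) {
    h = (Math.imul(h, 31) + s.charCodeAt(i)) | 0;
  }
  return h;
}

// Pay the hashing cost once, at creation time.
function makeComposite(values) {
  const hash = values.reduce((h, v) => hashValue(v, h), 0);
  return Object.freeze({ values: Object.freeze([...values]), hash });
}

function compositeEquals(a, b) {
  if (a.hash !== b.hash) return false;                 // cheap rejection path
  if (a.values.length !== b.values.length) return false;
  return a.values.every((v, i) => v === b.values[i]);  // deep check on hash match
}
```

This shows the cost split SYG and ACE are debating: creation does O(size) work so that most failed lookups cost a single integer comparison.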

JSC: It’s a reply to KG’s topic which popped off the queue before I could finish my reply. Does anyone else have replies to SYG’s? If not, I want to say to KG, who was wondering about the people wanting Records & Tuples for immutable data structures but not finding Object.freeze acceptable. I was one of the people who was eagerly waiting for Records & Tuples. I am a huge fan of efficient persistent data structures, persistent immutable data structures like you see in Scala, Clojure, Immutable.js which was inspired by them, all which can quickly create new versions of data structures with fast deep changes to inner keys without any copying. Object.freeze would not address this, because creating changed versions requires deep copying. However, I accept that the engines say that adding immutable persistent data structures to the core language is not practical. So I’m also fine with being able to just use composite keys in Maps and Sets. That’s my view, as someone who was eagerly watching the old proposal for immutable data structures. Thanks.

MF: You asked about key sorting during the presentation. I want to give some feedback on that. I think key sorting would be important if Composite.equals weren’t doing its own equality comparison—otherwise you could tell the difference between two composites even though they are equal. But because it’s Composite.equals and not Object.is, I don’t think key sorting is important. It’s not important that equal keys also sort the same.

ACE: Yeah. I agree.

MF: Yeah. That’s my opinion on that.

MF: The next one is on the base case of Composite.equals. You had it in the slides as SameValueZero; I am of the opinion that SameValue would be better here. With the comparison we have in Maps and Sets today, you can’t actually tell: they do a normalization of -0 to 0, so when you put in a -0, you get a 0 back. It doesn’t matter whether they do SameValue or SameValueZero. But it would matter with composites, because when you put in a -0, you can observe it—you get a -0 back. Which means you should use SameValue to compare them. It would also matter for layering: if you have a map that’s already doing SameValueZero, it’s very hard to get a SameValue map—I have a library that does that—and I don’t know if it would be significantly harder or just impossible if composites were also doing SameValueZero. I would strongly support SameValue here.
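
The -0 normalization MF refers to is observable in today’s Maps with a few lines:

```javascript
// Maps normalize a -0 key to +0 on insertion, so through a Map you
// cannot tell whether lookup uses SameValue or SameValueZero. A
// composite would let you read the -0 back out—MF's argument for
// comparing composites with SameValue.
const m = new Map();
m.set(-0, "zero");

console.log(m.has(0));                       // true: -0 and +0 are treated alike
console.log(Object.is([...m.keys()][0], 0)); // true: the stored key was normalized to +0

console.log(Object.is(-0, 0));               // false: SameValue distinguishes them
```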

ACE: Thank you. A decent part of Stage 1 would be unfortunately talking about -0. I thought that -0 was a thing of my past. I can see it’s a thing of my future too. MM has a reply?

MM: Yeah. The sorting of keys is a good open-and-shut case for why canonicalization is impossible, if we admit anonymous symbols as property names. There is no possible canonical sorting of anonymous symbols, and if you canonicalize by simply going with whichever one became canonical first, then that’s history-dependent and opens a global communications channel. So I think you can’t have them.

ACE: I was imagining them to have symbol keys. If we do that, sorting is off the table.

MM: Therefore, canonicalization is off the table?

ACE: Yes, if we want them to have non-registered symbol keys.

DLM: So there’s a point of order. We have 8 minutes left. We have heard pretty positive things so far. I don’t think we have heard anything that would block asking for Stage 1. So it might be a good idea to do that shortly

ACE: Yeah. Yes. It’s a great suggestion. If we could do that now. And then if we have any time left, we can pick a favorite topic from the queue.

ACE: So I am asking Stage 1 for this new composites proposal.

WH: I support this.

ACE: Thanks, WH.

[In the queue] Explicit support from JHD, CDA, SpiderMonkey team/DLM, MF, CZW, NRO, MM, SYG.

ACE: Any objections?

[silence]

ACE: Thanks. That’s really great! I’ve been mulling over this for some time. I am really pleased. We have 3 minutes left, so we can still keep chatting a little bit more. But I am shaking with excitement right now!

EAO: Could you speak briefly about why Composite is not a class?

ACE: So it was only because—no. It was initially because, as I gradually evolved my thinking from the Records & Tuples proposal over to this, I was still thinking of records and tuples. And I was thinking that there would just be this one factory, and it would kind of switch its behavior based on what you passed in: if you passed in a plain object, you would get back something like a record; if you passed in an array, you would get back something like a tuple. That wouldn’t make sense as a class, with the composite’s prototype changing based on the input. But a bunch of conversations since, about this whole space—should there be tuple-like composites for when you literally just have a list of things and giving them names doesn’t make sense? should you get a prototype, so you can have methods and things?—make me think more about whether it should actually be new Composite. And this loops back to SYG’s point: if you can do new Composite, that means you can do, like, class Position extends Composite, and with Reflect.construct and NewTarget the prototype is now my Position.prototype. Some people think that’s cool. Some people also think that’s not going to be good. I think this is going to be one of the main things to talk about. One, -0. And two, how we should design this API to encourage the behaviors we want as a committee in the way these are used: should they really be thought about only for this particular case, or should they be used as a general data model? And new Composite should be a part of that conversation.

MAH: On the previous topic I thought there was a comment from WH on the queue and I would like to hear it. I don’t know why he disappeared in the shuffling. Or maybe it was removed?

DLM: No, that was my fault, sorry.

WH: Composite.equals is a replacement for SameValueZero. If we switched the semantics to SameValue, it would break Map and Set semantics.

MAH: I think the idea here would be to use SameValue to compare the composites themselves, maybe not at the top level. If there is a concern that a plain -0 should still compare SameValueZero to 0, we can keep that; but once you put a -0 inside a composite, you don’t need to keep that rule.

WH: That would confuse users. We talked about this extensively in Records & Tuples.

ACE: I felt this topic was behind us because of the 350-comment thread on Records & Tuples, but I think I will end up doing a slide deck on this particular topic, because it sounds like there is a variety of opinions amongst the committee.

MM: The reason I think it needs to be called Composite(), not new Composite, is that if I saw new Composite, I would expect it to give me something fresh even if the input was already a composite. Whereas with a plain function call, the expectation is that it acts as a coercer: if you feed it the kind of thing that it produces, it returns that thing directly, without creating a fresh wrapper.

ACE: Yeah, with new we would maybe lose the ability to do that kind of optimization.

DLM: Okay, I think we will stop this conversation there, as we are almost out of time. Congratulations, ACE.

Speaker's Summary of Key Points

  • The problem of working with composite data in Maps and Sets was presented
  • A proposal was presented for adding a special object type that is compared structurally when used in Maps, Sets and some other APIs.
  • There was discussion on whether this helps with existing types such as Temporal, which the initial proposal does not address
  • There was discussion on an alternative design which eagerly interns the objects, instead of introducing new logic to existing equality APIs
  • There was discussion on SameValueZero vs SameValue

Conclusion

  • Consensus for stage 1 was achieved
  • Discussion about canonicalization and handling of negative zero will continue as part of stage 1

Immutable ArrayBuffer for Stage 3

Presenter: Mark Miller (MM), Petter Hoddie (PHE)

MM: I would like to ask everyone’s permission to do my normal thing and record the presentation, including questions asked during the presentation. And then I will turn the recording off when we get into the explicit Q&A section.

USA: Let’s wait maybe a few seconds. Seems like nobody has objected.

MM: Great. Thank you.

MM: In the last several meetings, with a little bit of effort, we have proceeded quickly through Stage 1, Stage 2, and Stage 2.7, and today I would like to ask the committee for Stage 3.

MM: I think it is a simple enough proposal that I don’t need to recap, but if anybody wants to ask questions about the content of the proposal, or for clarification, please do, so that people can understand where we are. This is the checklist for Stage 3, based on the Stage 3 criteria. We’ve written test262 tests and submitted them as a PR, but we have not yet gotten reviews on that, and therefore of course we have not yet merged it. As for implementer feedback: we would like feedback from the high-speed engines. The XS engine has implemented the proposal against the test262 tests, and all of their feedback has been positive; we have not yet received feedback from other implementations.

MM: There were two things listed as normative issues. One was to document the permanent bidirectional stability of Immutable ArrayBuffer contents—immutable meaning it is not just read-only, but a bilateral guarantee: not only can you not mutate it, but what you are seeing will be permanently stable.
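
A small illustration of the distinction MM draws between read-only and immutable: with today’s ArrayBuffers, a view you only ever read from can still observe mutation made through another view of the same buffer; the proposal’s immutable buffers rule that out in both directions. The `transferToImmutable` call below is the proposed API, shown commented out since no mainstream engine ships it yet.

```javascript
// With a plain ArrayBuffer, "read-only by convention" is not stability:
const buf = new ArrayBuffer(4);
const writer = new Uint8Array(buf);
const reader = new Uint8Array(buf); // we only ever read through this view
writer[0] = 42;
console.log(reader[0]); // 42 — the "read-only" view still saw the change

// Under the proposal (hypothetical until engines implement it):
// const immutable = buf.transferToImmutable();
// The contents of `immutable` can never change in either direction:
// nobody can write to it, and what you read is permanently stable.
```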

MM: RGN added this text documenting the permanent bidirectional stability after the feedback from last time. The remaining item, which we have not checked off yet, is that we purposely did not declare the order of operations resolved, because we wanted to find out whether there were any implementation concerns—sometimes a difference in order of operations that is observable, but does not matter at all to the JavaScript programmer, would permit a simpler implementation. We have not received any such feedback, and per the explanation of the purpose of the stages we did not have to get it before 2.7, as long as we are willing to accept this as the normative spec until we get feedback to the contrary. The spec we adopted was a reaction to previous feedback that sliceToImmutable, which is the only place where this arises, should just be literally as close as possible to .slice(), so we added this exception here to keep the two as close as possible, including both the order of operations and whether they throw or do nothing. All the engines implement slice right now, and I have not heard any complaints about slice doing anything inefficiently, so I would like the committee to approve this as the one spec going into Stage 3, still of course subject to Stage 3 implementer feedback.

MM: This is the PR against test262, written by PHE, co-champion and part of Moddable’s XS. And this is the formal explanation of Stage 3, the criteria it is meant to serve. So, any questions—and may I have Stage 3?

MM: Now I will stop recording.

NRO: So, I would prefer that we wait —

MM: Can you repeat that?

NRO: I would prefer if, before moving to Stage 3, we would wait for the tests to be merged. The reason I am saying this is that there have been two proposals in Stage 3 with tests pending to be merged—decorators and the using declarations proposal—and in both cases implementers were confident about the coverage because the proposal was Stage 3, but in both cases there were bugs that the unmerged tests would have caught. It does not need to happen during plenary, but if you [INDISCERNIBLE]. It will be done automatically when it is merged, and I would prefer to wait. I am comfortable asking this because, as of a few weeks ago, there has not been much material awaiting such review.

MM: So can I ask people involved in test262, and those with committer status in test262: please do review it, I am eager for your feedback. And is there anybody who thinks they might actually get to do that before this plenary is over?

MM: Okay. So once we get the test262 tests to the point where they’re merged, we’d come back and ask for Stage 3. It is a valid objection, and that is why I brought it up in the presentation.

USA: There is a question about process by JSL?

JSL: Yeah, the question is straightforward: can we agree to advance automatically once the tests are merged? If we have consensus, and the tests are the only reason to withhold, could that advancement be automatic?

USA: I can help answer: we can get conditional consensus on Stage 3, such that the proposal advances to Stage 3 once the tests are merged.

MM: I would certainly like to have that conditional approval, sure.

SYG: The significant thing for Stage 3 testing is that, whether it is merged on trunk or into staging, it needs to be executable. As for conditional Stage 3 on merging the tests, I guess that’s okay if this is the only remaining thing. In general I would like to minimize the number of conditional advancements, because that just increases the likelihood of things falling through the cracks; so we can come back, since this is not in any particular hurry.

MM: I am happy to come back as well. I don’t think postponing it for one meeting will materially affect anything. SYG, let me ask you: has there been any exploratory implementation work on this proposal at Google?

SYG: No, and we will not look at it until it reaches Stage 3.

MM: I see. That is the reason why it would be nice to get to Stage 3 earlier than next meeting, but not a big deal.

SYG: Even if it is conditional Stage 3, without the tests we cannot see whether it works.

USA: All right, we have a reply by MLS?

MLS: JavaScriptCore won't start looking at it until Stage 3 as well.

DE: How complete are the tests that are out for review? I think it’s important that we have some tests merged, but are they complete enough for Stage 3?

MM: I cannot speak to that myself. I certainly recognize the importance of the question and I just don’t know.

RGN: I can speak to it. I am a test262 maintainer; I have not put formal approval on the tests, but I reviewed them, and I think they are complete enough for Stage 3. Follow-ups are expected for addressing the order-of-operations cases with respect to error handling, but that is common even in mature tests. And there is coverage for transferToImmutable but not yet for sliceToImmutable, though that will be largely analogous. We could push for its inclusion in this pull request, but whether in this one or not, it will come during Stage 3.

DE: I don’t have an opinion whether or not those are on this pull request or in a separate pull request, but before getting to Stage 3, we should complete all of those follow up items and not have any known gaps. Just because we have existing coverage gaps overall does not mean that proposals can reach Stage 3 with known coverage gaps.

RGN: In this case I think it is acceptable, because those are the very things on which implementer feedback is requested.

DE: Right, so the test will be helpful to get that feedback.

MM: So, I think all of this points to bringing it back for Stage 3 next meeting, which I am happy to do. What RGN says does raise a question for the committee: RGN is both a test262 committer and a co-champion of the proposal; he did not write the tests—PHE wrote them—but he reviewed them. Is there any problem with RGN reviewing this in his official test262 capacity, despite the fact that he is a champion? I don’t know—

DE: As a non-maintainer of test262, I think that’s fine, for anyone prepared to do an intellectually honest job. RGN just volunteered points for further work, which is a great demonstration of that honesty.

MM: Good, awesome.

DE: Unless anyone else has opinions here?

PFC: In addition to the point I wanted to make, I will also answer the immediate question which is: I think it's fine for RGN to review that. Having the specific test262 reviewer not be a champion is not a standard that we have required for anything else.

PFC: I wanted to take the opportunity to make a point about how to facilitate test262 reviews, not just for this proposal specifically but in general. We have some documentation about testing plans that my colleague IOA wrote which we will merge soon hopefully. I recommend to all proposal champions, before you write the tests, open an issue with a testing plan, because that will help us as reviewers to get a sense of how complete the coverage is without having to dive into every corner of every proposal, because that is the thing that that really takes the most time when we are reviewing. And also once you have a testing plan with a checklist, that will make it easier to open multiple smaller pull requests than one large one, and that helps us because currently we have a lot of maintainers that have limited time for reviews. So if the choice is between reviewing three small PRs or 20% of a large one, I think people will naturally want to review the smaller pull requests. Having them be small and marking them as done in the testing plan as they get merged, helps us get around to things faster and merge them faster.

NRO: I think RGN should be allowed to review the request. It is good for champions of a proposal to review the tests, and having an approval from RGN is better than just having an approval from PFC, because RGN has more context on the proposal.

DMM: +1 on more smaller PRs for the tests rather than one giant one. The current PR is almost 1K lines and we will miss stuff in trying to review that.

MM: Okay I will communicate to PHE offline.

USA: That was the queue, MM. Would you like to ask for consensus?

MM: There is no consensus to ask for, and I think what we settled on is that I will come back next meeting and ask for Stage 3, assuming we get the test262 tests merged. And I will suggest to PHE that the tests be divided into smaller PRs.

Speaker's Summary of Key Points

  • Immutable ArrayBuffer was presented for Stage 3. test262 tests have been written and submitted as a PR, but reviews and feedback about the tests are still pending, so Stage 3 was deferred.
  • There was a discussion about facilitating the test262 review process by opening an issue with testing plans first and then submitting smaller pull requests to make reviews more manageable.

Conclusion

The proposal will be brought for Stage 3 at a future meeting, once tests are landed and known coverage gaps are filled. For now, it remains at Stage 2.7.

Upsert for Stage 2.7

Presenter: Daniel Minor (DLM)

DLM: So, it has been a little while since I have talked about upsert; today I am presenting it for Stage 2.7.

DLM: …you are using a Map and you want to do an update, but you are not sure whether there is already a value associated with your key. What people do today is roughly along the lines of this snippet of example code: check if the map has the key; if it is there, you do one thing, and if it is not there, you do something else.

DLM: The proposed solution is to add two methods to Map and WeakMap. One is getOrInsert, which takes a key and a value: it searches for the key in the map and, if found, returns the associated value; otherwise it inserts the value into the map and returns that. And there is a computed variant, which takes a callback function to compute the value to insert. We discussed this last time, where we decided that we cannot prevent the callback from modifying the map, but we will insert the value on top of the modifications that it made.
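
The two methods can be sketched as standalone helpers. Note this is only an illustration of the intended behavior: the proposal adds these as methods on Map.prototype and WeakMap.prototype, not as free functions.

```javascript
// Userland sketch of the proposed getOrInsert semantics.
function getOrInsert(map, key, value) {
  if (map.has(key)) return map.get(key);
  map.set(key, value);
  return value;
}

// Computed variant: the value is only produced when the key is absent.
function getOrInsertComputed(map, key, callback) {
  if (map.has(key)) return map.get(key);
  const value = callback(key);
  // Per the discussion above: the callback may itself have modified the
  // map; the computed value is inserted on top of those modifications.
  map.set(key, value);
  return value;
}

// Replaces today's manual has/get/set dance:
const counts = new Map();
getOrInsert(counts, "a", 0);
counts.set("a", counts.get("a") + 1); // counts: "a" -> 1
```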

DLM: Last time there was one outstanding issue, issue #60, and there was a brief discussion about it: whether the name should be getOrSet—“insert” suggesting something you do once, while “set” can be done multiple times. We resolved that issue with the decision to continue using getOrInsert and getOrInsertComputed.

DLM: And that was the last issue. So I would like to ask for consensus for Stage 2.7.

MF [on queue]: +1 support for 2.7

DMM [on queue]: +1 support for 2.7. EOM.

USA: Anyone oppose?

USA: Congratulations, you have Stage 2.7.

DLM: Thank you very much. And I would like to thank everyone that helped out, especially my Stage 2.7 reviewers.

USA: You are pretty swift. That was less than five minutes.

Speaker's Summary of Key Points

The last remaining item prior to Stage 2.7 was the issue about what to call the two new methods on Map and WeakMap, which has been resolved to use getOrInsert and getOrInsertComputed. Consensus was asked for and reached for Stage 2.7.

Conclusion

The upsert proposal advanced to Stage 2.7.

Withdrawing Records & Tuples

Presenter: Ashley Claymore (ACE)

ACE: All right, so, as I mentioned earlier, the Records & Tuples proposal has this new reimagining, 'Composites', which is now Stage 1. While composites are looking at a similar design space, the real core of the old proposal was adding these new primitives, which is fundamentally different. And this slide is a nice quick little montage of the previous times we spent talking about Records & Tuples; there were a lot of other talks outside of plenary too. But it is clear, at least to everyone in plenary—and to a decent amount of the community outside in the ecosystem too—that Records & Tuples is not going to be progressing any time soon. Adding these new primitives did not find a way to move forward, and we have composites as a new way of looking at this problem space. So, I am proposing that we withdraw the Records & Tuples proposal.

NRO [on queue]: RIP R&T, you'll be missed. I support withdrawing. EOM

MM: Just taking the opportunity: I think composites are ready for Stage 2. The question is whether anything is outstanding?

JHD: There is no spec.

MM: Oh.

JHD: We cannot go for Stage 2 today even if they asked for it.

MM: Oh I did not notice that, okay thank you.

USA: Okay, I suppose that is not on the table then?

EAO: I like the composite approach. I don’t really like “composite” as a name for it. It does not really feel like it means anything, and in my head it is hard to remember if it’s “composite” or “composable” or something similar. Since we have cleared out the “record” space, that could be one direction in which to go here. But I would like to note that there is another direction available: leaning into the use case as presented—composite keys—and using “key” as the term here, which could be better than “composite”. The decision ought to clarify whether we are going more in the direction of “this is the thing you use as a key” or “this is the thing you use as a generic immutable thing”. Sitting in this middle ground and using a weird word for it is awkward, and we need to pick one of these directions. And the primary way of doing that is by bikeshedding on what the name of this thing is.

ACE: Yes, definitely. I can't remember exactly where the name composite came from; I think it emerged when we were in Seattle, maybe from DE. The proposal can end up with a different name. The only name I think we should not use is “Record”, because it has too much precedent in TypeScript and we cannot ignore that fact about the ecosystem. So I would not be keen on the word “Record”, but I would be keen to chat about other names. And just because it is called the “composites” proposal, the API name does not need to be Composite.

EAO: Do you have any initial thoughts on “key” as the name here?

ACE: I think 'key' is part of that conversation about which way we want to push this. Do we push it so it firmly sits in the space where you use these things as keys, or do you use these things as part of your data model? If we really want to push people in the key direction, then yes, calling them keys would be the way to do that. But first we need to decide which direction we want to push it in.

EAO: Haven’t we kind of done that by agreeing to the use cases and needs that you have presented here, which are quite explicitly about doing this as a composite key, sort of a thing? And maybe this goes a little meta, but if we go for something way more generic like records and tuples, that should be something we explicitly agree on because that would be effectively changing the use cases that the proposal is aiming for.

ACE: Yeah, as I said, this is still something we should discuss. I think that most of the value we get at the beginning is a proposal that is focused on the composite key case. But I really want us to think long-term as well, so that the committee that sits around a future version of this call isn’t annoyed at the decisions that we make now. I want to really make sure that we are thinking about some what-if scenarios. Of course we cannot predict the future, and we can’t over-invest in coming up with something perfect that fits all possible futures; that’s not possible. But I want us to take a moment to pause and think a little bit about it, and not get so focused on just this one case that we end up missing something we end up regretting. I think there is a conversation to be had there. And as I said earlier, I think it would be a shame if people have to keep converting their data into these things, and one way of avoiding that is if you can use these things more generically. But I can see why some people think that we should not go in that direction.

USA: So yeah, congratulations, so to say, to ACE on consensus, however bittersweet this might be, and we look forward to composites.

Speaker's Summary of Key Points

Following on from the Composites proposal achieving Stage 1, and the Records & Tuples proposal not managing to gain further consensus for adding new primitives, it was proposed that the Records & Tuples proposal be withdrawn.

Conclusion

The Records & Tuples proposal has been withdrawn.