111th TC39 Meeting
January 16, 2026
Day One—18 November 2025
Attendees:
| Name | Abbreviation | Organization |
|---|---|---|
| Waldemar Horwat | WH | Invited Expert |
| Richard Gibson | RGN | Agoric |
| Josh Goldberg | JKG | Invited Expert |
| Ruben Bridgewater | RBR | Invited Expert |
| Dmitry Makhnev | DJM | JetBrains |
| Frank Yung-Fong Tang | FYT | |
| Anthony Fu | AFU | Vercel |
| James M Snell | JSL | Cloudflare |
| Daniel Minor | DLM | Mozilla |
| Jonathan Kuperman | JKP | Bloomberg |
| Christian Ulbrich | CHU | Zalari |
| Marina Aisa | AIS | Apple |
| Martin Alvarez | MAE | Huawei |
| Ashley Claymore | ACE | Bloomberg |
| Andreu Botella | ABO | Igalia |
| Devin Rousso | DRO | Invited Expert |
| Jake Archibald | JAD | Mozilla |
| Lea Verou | LVU | OpenJS |
| Mathieu Hofman | MAH | Agoric |
| Caio Lima | CLA | Igalia |
| Yusuke Suzuki | YSZ | Apple |
| Chip Morningstar | CM | Consensys |
| Aki Rose Braun | AKI | Ecma International |
| Philip Chimento | PFC | Igalia |
| Eemeli Aro | EAO | Mozilla |
| Mikhail Barash | MBH | Univ. of Bergen |
| Keith Miller | KM | Apple |
| Ross Kirsling | RKG | Sony |
| Chris de Almeida | CDA | IBM |
| Nicolò Ribaudo | NRO | Igalia |
| Ron Buckton | RBN | F5 |
| Shane F Carr | SFC | |
| Istvan Sebestyen | IS | Ecma |
| Daniel Rosenwasser | DRR | Microsoft |
| Stephen Hicks | SHS | |
| Ujjwal Sharma | USA | Igalia |
| Guy Bedford | GB | Cloudflare |
| Chengzhong Wu | CZW | Bloomberg |
| Gus Caplan | GCL | Deno |
| Jordan Harband | JHD | Socket |
| Justin Ridgewell | JRL | |
| Kevin Gibbons | KG | F5 |
| Michael Ficarra | MF | F5 |
| Mark S. Miller | MM | Agoric |
| Olivier Flückiger | OFR | |
| Rob Palmer | RPR | Bloomberg |
Opening & Welcome
Presenter: Rob Palmer (RPR)
RPR: All right, we’re about to begin. I’m logged in on the Zoom. Let me bring up my slides. Okay, let’s see, here we go. All right, welcome, everyone, to the 111th meeting of TC39, here at Bloomberg in Tokyo, yay! There’s a lot of energy in this room. And in particular, welcome to everyone who has flown here. People have come from far away, and I know we’ve got a number of first-time meeting participants, so a special welcome to you all. So yeah, we’re here in Tokyo. It seems a particularly attractive destination, and I say that because when we were here two years ago we had 25 people; this time we’ve had, I think, 43 sign-ups, so that is great. Being in person is a key element of the committee, and a key part of ensuring good relations and ensuring we all understand each other. I have some fun facts here about Tokyo: other than it being the largest metropolitan area on earth and the home of modern sushi, there are very good reasons why the Tokyo Tower is painted orange, for safety. Where is the cat bus? I’m guessing it will be related to the Studio Ghibli park or museum. Chris is messaging me; he told me how it was pronounced, and I don’t know if it’s “gibli” or “jibli”. It’s the never-ending… Okay, it is “jibli”, thank you. All right, so to help run these meetings we have the chair group: that consists of myself, I’m Rob; we have Chris here; and remotely we have Ujjwal. We’re assisted by our facilitators: Justin; Daniel here, that’s Dan Minor; and Daniel remotely. I should also say a few things about our facilities here at Bloomberg today. If you need the restroom, the men’s is down the hallway, with a sign pointing to the left, and the women’s is here just by the meeting room exit. If you need to take a call or you’d like some audio privacy, the room over there is not technically reserved for us, but if you see it’s empty, please feel free to use it.
And then in case of any kind of emergency, the fire exits are lit up with green signs over there and over there. Yeah, I think that’s it. And please do take a look at the views out the window; they’re particularly good. Everyone here today has obviously signed in as registered to be in person, but please make sure you also sign in on the Zoom video feed as well, because that helps with all of our records. We have a code of conduct, which you can find linked from the tc39.es repo. Please do have a read of it, and also remember that we’re not just trying to follow it to the letter; we’re also trying to respect the spirit of the code of conduct. Chris sums it up as “be excellent to each other”, which I think is a good way to live. Our daily schedule means that the meeting begins at 10 a.m. and will end at 5 p.m. each day, apart from Thursday, when we will end a little earlier, at 4 p.m. One thing to note about our timings this week is that we have a significant amount of agenda overflow. People seem to have been very keen to bring a lot of work to this meeting, and I appreciate that. The chairs are going to try to make sure we get through as much of it as possible. Whenever we finish before the timebox, that will always be appreciated, and one of the ways you can help is, when it comes to the discussion phase, to be mindful of how long your contribution is taking, and, for example, whether that level of discussion is the appropriate thing to be focusing on at whatever stage a given proposal is. So we’re really going to do our best to keep to time. That also means that the possibility for continuations, as we have done in previous meetings, is unlikely. We shall see. But, yeah, there’s significantly more on the agenda than we have capacity for.
In terms of events, a number of people have already been to prior events leading up to this: we’ve had TPAC and JSConf at the weekend, and the TG5 workshop in this room yesterday. For the coming week, on Wednesday we have our social dinner. Everyone who has registered as a regular delegate here, as an Ecma member, is entitled to come, invited experts and so on; I’m just making a differentiation from the observers. And please let me know if you are not coming, because the restaurant needs to be precise on numbers. That will be on Wednesday from 6:30, and you can see on the map that it’s not too far away. In terms of meeting participation and how we work, we have various tools to ensure good communication. The main tool here is TCQ, and that tool has recently been refreshed by Christian here; thank you so much. It’s one of our essential tools, one we’re very proud of, and it helps us keep the discussion on track. Once the agenda is running, this is the view that you’ll see for the current topic, and you’ll see that there are four buttons you can use to participate in the discussion. Try to keep things linear and orderly, so one at a time. If you are on the queue and it’s your turn, you’ll see this button, “I’m done speaking”. The chairs normally ask people not to use the other button, the “do not push this” button. Oh, okay, all right. Well, TCQ has been updated to remove that button, so temptation has been removed. Thank you, Christian; I assume that was you. When it comes to those buttons, you normally want to prefer the buttons on the left, so “new topic” is the most natural and orderly way of adding new topics. If you do want to reply, then you can use “discuss current topic”. Likewise, with slightly more urgency, there is “clarifying question”; we treat that with higher priority.
And the highest-priority one, the red “point of order”, is for emergency situations, such as note taking having broken, the internet being down, or some other high-priority issue that you want to bring to the chairs’ attention. We also have Matrix for asynchronous and realtime chat. Links to this are on the Reflector invite, and the main channel you will want to join is TC39 Delegates; that is the primary place where work gets done. The secondary channel is the Temporal Dead Zone, which is where work does not get done, and where you might see some off-topic things. As part of Ecma, we have an IPR policy. If you’re a delegate from an Ecma member company or organization, you will have already signed the appropriate materials. Likewise, people who have completed the invited expert process are equally covered by the forms they signed as part of that. For anyone else, we ask that you do not speak. And if you’re not in one of those categories and have not already signed the RFTG form, please do so; that normally applies to observers. All right. We take notes of everything that happens in the meeting. This is transcribed live; we have a transcriptionist. I should check, is the transcriptionist on the call?
Yes, I’m here.
RPR: Excellent. Thank you, Julie. We also need people to help fix up these notes live; I’ll get to that. And I will also read out our disclaimer: a detailed transcript of the meeting is being prepared and will eventually be posted on GitHub. You may review it at any time for accuracy, and may request corrections and deletions, either in the first two weeks after the TC39 meeting, or subsequently by making a PR in the notes repository or contacting the TC39 chairs. One of the ways you can really help in this meeting, something that everyone will be happy about if you do, is to assist us with taking these notes. This work is really just working on a Google Doc that you can get from the invite, and the actual task is only to make small corrections, where you see them, to what the transcriptionist is capturing, where you think it will better reflect the content. Another key part of that is the attributions: tagging the notes with the three-letter acronym of whoever is speaking. You can find everyone’s TLAs, everyone’s designations, in delegates.txt, which is part of the notes repository on GitHub. Not “jit hub”, Chris. The next meeting is the 112th meeting. It runs for three days in January, and it is a remote meeting on UTC−5. The full agenda for the year is published on the Reflector as well. So let us begin with one of the most important things: as I’ve said, we look for notetakers to assist us. Please could I get a couple of volunteers to begin that for this session? Straight away, we’ve got Christian and RBR with hands up; thank you so much. We show our appreciation for the notetakers as much as we can. If there were swag, you would get some. We should also check for the previous meeting: the minutes have been published and merged into the notes repo. Please can I ask, are there any objections to the minutes for the previous meeting? We have silence, so I’ll record that the previous minutes are approved.
Next up, for this meeting and the current agenda that is set out, are there any comments? Okay, we have a comment from NRO on the current agenda.
NRO: There are two topics on the agenda for stage advancement, Object.propertyCount and Object.getNonIndexStringProperties. They were both presented two meetings ago, and both were rejected two meetings ago, and there have been no updates to the proposals since, so it is impossible to tell what the committee is going to be asked for. We tried reviewing the proposals, and we also looked at the pull requests and the issues, and we couldn’t figure it out. I’m happy to discuss the proposals, but it would be difficult to have a position on stage advancement.
RPR: Okay, so those items remain on the agenda, but that’s a comment about our regular procedure: we aim to get all the materials on the agenda, and advertised, ahead of time. All right, any objections to the current agenda? No? Okay, then we will adopt it. All right, first up, then: Aki, are you ready with the secretary’s report?
AKI: Yeah, I’m ready. Can I steal the screen sharing?
RPR: Please, go ahead.
Secretary's Report
Presenter: Aki Rose Braun (AKI)
AKI: All right, hello, everyone. I’m Aki, and I’m here representing the Ecma secretariat. Samina sends her regards and wishes she could be here, but as it happens she is receiving an award right now and wasn’t able to attend. So she sends her regards and, oh yes, her chocolate. I have two bags of Swiss chocolate that I have been carrying around since I was in Switzerland in October, and I hope you all really enjoy them, because I’m not taking them back.
AKI: So, some notes from the secretariat. The Ecma General Assembly is in December, and there are a ton of new standards. There’s a new TC. We also want to talk about some ongoing planning for FOSDEM and JSON Schema. There’s going to be a vote today, which I know makes us all itchy, but we’re going to do it. And I hope you all download these slides and look at them later, because the annexes are something everyone should review, but I’m not going to read them verbatim to you.
AKI: The 2025 December General Assembly has 13 new drafts that we are approving as standards. Ecma is, like, at warp speed right now, and I am pretty excited about it. Ecma standardizes a lot of things around technology, and one of those things is how much noise computer equipment is allowed to make; think hard drives clicking, or hard drives whirring, I should say. Ecma has long been the standard bearer for how much noise computers are allowed to make, and there are some new standards there. There’s also an update to the CD-ROM standard. But perhaps more relevant to this crowd, ECMA-424 is going to its second edition, CycloneDX v1.7. That’s going to ISO/IEC JTC 1 for fast-track approval, and it will become an international standard. And TC54’s PURL, package URL, which doesn’t have a number yet, is also going to ISO/IEC JTC 1 for fast-track submission. And the Common Lifecycle Enumeration from TC54, spearheaded by JHD, is also going to be approved at the GA. There’s a whole bunch of new AI natural-language interaction standards, which, and this is surprising to none of you, I have not really looked into that much, because it’s not really my thing, but I’m sure it’s great. And then the Minimum Common Web API from WinterTC is going to be approved, so WinterTC is going to have its first standard; congrats to ABO (and LCA, who is not on the call because it’s a terrible hour right now). So we have a whole bunch; it is more than I’ve ever seen. We have a new TC, the HLSL technical committee, for the High-Level Shader Language. Microsoft and, I believe, Apple were the main impetus, but we also have Meta, Sony, Google, and UC Santa Cruz participating.
If any part of your company might be interested in this, you should let them know it is happening, because I think we already have the players for it to be successful, and, therefore, everyone should make sure their voice is heard, so it’s not just the biggest players who get to say what this specification looks like. Oh, and also, this is going to be another royalty-free technical committee. Almost every TC that’s been chartered in the last ten or eleven years, maybe every one, has been royalty-free. Nice influence, TC39.
AKI: Outreach at FOSDEM 2026 in Brussels, we expect there will be an Ecma session and expect to work with OWASP to have some sort of event regarding software transparency. If anybody is attending FOSDEM, you should let us know. If anybody wants to be involved in anything Ecma- related at FOSDEM, you should let us know. I will be there, Samina will be there. And I think several people from TC54 will be there.
AKI: We continue to discuss the JSON schema task group for the approved and not yet chartered Schema Language & Structured Data Technical Committee. We’re just making sure that everybody has signed the IPR agreement and everybody feels comfortable with what they’re agreeing to—which is not giving up anything, but rather allowing Ecma to use their copyright and use their license. And we’re also working directly with IETF to make sure that any standardization that happened on this in IETF can also be ongoing without any IPR issues, for which IETF has been super supportive. They’ve been great.
AKI: And so as I mentioned, make some time to check the annex. We go through the code of conduct, the invited expert policy—I think everyone would benefit from being familiar with it for the next time an invited expert invitation comes up. The annex also includes a list of the latest documents. There’s so much that the GA publishes that it’s valuable to read through the titles and just see if any of this matters to you, especially if you know which TC is which, which I understand if you don’t.
AKI: Yeah, lots of action going on in ECMA right now.
AKI: And the chairs have just published the 2026 schedule in the last two weeks. For in-person meetings, the current plan is that Samina will be in New York, we’ll both probably be in Amsterdam. Tokyo next year is not yet decided.
AKI: Okay, now on to the vote thing. The last two versions of ECMA-262 and ECMA-402 did not have the alternative copyright notice and copyright license, which is what we have in those documents to allow people to fork. We hope nobody wants to fork them, but we don’t want to stop anybody from doing so. What we need to do as a committee is vote to approve a corrigendum for each of these standards. We will publish an update that’s just a cover sheet and the one page we are changing, with the old copyright and the new copyright. I think we should—yeah, okay, I’ll go through it quickly.
AKI: I know we all get itchy when it comes time to vote. I will try to make this as painless as possible. Every member gets one vote. That means that all the delegates of a company share one vote. Observers don’t get to vote. Invited experts don’t get to vote but they can certainly lobby the members as much as they want. And it’s a simple majority. So I’m not concerned about it passing overall, but if anybody wants to get their disagreement on the record, they can certainly do so. If for some reason we can’t agree to vote, we’ll do a postal ballot. I don’t think that’s going to be a problem. We need to vote on all four, but if there isn’t any dissent, we can vote on them all at once. So, chairs, if you would be so kind as to formally call a vote on approving the corrigenda.
RPR: We have a clarifying question from Eemeli—please, go ahead.
EAO: Could we just, like, not and just ask for consensus, which counts effectively as everyone consenting and unanimously voting.
AKI: That’s why we start by asking for dissent.
EAO: If we don’t dissent, we don’t have to do all the complicated stuff? Cool.
RPR: I think we’re all on the same page, yes.
ACE: I want to do a postal ballot
AKI: All right, Ashley, you can write us a letter.
RPR: All right, well, would you like to display—you have a slide with all four items.
AKI: There we go. Here is what we’re voting on.
RPR: Okay. So the first call here is do we have any objections to approving all four items on this slide? Okay, we have explicit support from Chris. No objections, so I will say that this is approved.
AKI: Great. Vote complete. I really wish I had a sound board and one of those “that was easy” buttons.
RPR: Yeah, thank you, Aki.
AKI: Thanks for your time, everyone.
RPR: There’s also a question from NRO.
NRO: Yeah, 426 already has everything correctly done? There is nothing to change there?
AKI: Correct [editor’s note: AKI was wrong.]
RPR: Aki, is that everything from your section? Perfect. Thank you so much.
Speaker's Summary of Key Points
- Ecma has had a busy year, and intends to publish 13 standards at next month’s GA
- Including standards with contributions from TC39 delegates JHD, ABO, and LCA
- TC57 HLSL (High-Level Shader Language) is getting off the ground; make sure your employers know
- FOSDEM 2026 is coming up, Ecma will be participating and wants to collaborate with anyone on the committee who may be present
- Chairs have published 2026 TC39 meeting schedule
- ECMA-262 & ECMA-402 corrigenda need voting
Conclusion
- Committee voted unanimously to accept all four corrigenda on 2024 and 2025 copyright notices.
ECMA262 Status Updates
Presenter: Kevin Gibbons (KG)
KG: Excellent. I’m somewhat sick, so forgive me if I’m difficult to understand. Updates since last time: not a very long list. We’ve landed a number of normative changes, but they’ve all been bug fixes and similar. I’ll go through them just for the sake of completeness.
KG: So we had a bug fix in String.prototype.lastIndexOf that was introduced during an earlier refactoring.
KG: There’s this change to copyWithin where RGN has noted the specification did not match reality, and we landed that change per consensus at the last meeting.
KG: We also landed this change, which was arguably editorial, where previously it was implied that the global object was required to be exotic, which was just a weird thing to do, because the only thing “exotic” implies is that the object doesn’t use all and only the ordinary object internal methods; there’s no reason to require that of hosts, and there was a loophole later that would allow you to have an ordinary object anyway. We landed this thinking it to be something that wouldn’t require consensus, even though it is arguably normative, which is why I’m calling it out here.
KG: PR 3700: MF, I believe, noticed that when we landed BigInts in the spec and introduced the BigInt type, we introduced the [[IsLockFree8]] field alongside the pre-existing [[IsLockFree2]] and [[IsLockFree4]]. The pre-existing ones were required to be consistent within an agent, for the lifetime of the agent, and that was not true for [[IsLockFree8]], which, A, was never ever going to happen—no one is running an architecture which is only sometimes lock-free for 8-byte types—and, B, was 100% just an accident. So we landed that without asking for any consensus, on the assumption that this was always the intent, and of course it is web reality.
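[Editorial illustration, not part of the discussion: the [[IsLockFree*]] fields back the `Atomics.isLockFree()` API, whose answers are required to be stable for the lifetime of an agent. A minimal sketch, runnable in any modern engine:]

```javascript
// Atomics.isLockFree(n) reports whether atomic operations on n-byte
// integers are lock-free in this agent. Per the spec, the answer for a
// given size must not change for the lifetime of the agent; PR 3700
// makes the 8-byte case ([[IsLockFree8]]) explicitly subject to the
// same requirement as the 2- and 4-byte cases.
const sizes = [1, 2, 4, 8];
const first = sizes.map((n) => Atomics.isLockFree(n));
const again = sizes.map((n) => Atomics.isLockFree(n));

// Repeated queries within one agent always agree.
console.log(first.every((v, i) => v === again[i])); // true
```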
KG: And then this last one is that there was a bug in the module-handling machinery, which I believe NRO noticed, a failing assert, that has now been corrected.
KG: And a handful of notable editorial changes. The first one, thanks again to RGN: a number of improvements to the TypedArray machinery in various places, which hopefully will clarify, or make more consistent, a bunch of that stuff. And then this last one is something we have been working towards for a very long time, although only incrementally. You may be familiar with the ! and ? macros in the spec, which are used for asserting that a potentially abrupt operation is not abrupt, and for unwrapping a potentially abrupt operation or propagating the abruptness to the caller, respectively. You may not know that ? was defined in terms of a ReturnIfAbrupt macro. That macro had historically been used in a number of other places as well, but through a series of refactorings made for other reasons, we ultimately removed all direct uses of ReturnIfAbrupt, which has allowed us to specify ! and ? directly. This also allowed us to fix a thing which has caused a number of people confusion over the years, which is that the specification of ReturnIfAbrupt was bad and wrong: it didn’t actually correspond to how it was used in the spec. Now ? and ! are specified directly, and hopefully that will be clearer for readers in the future. It shouldn’t actually affect anyone, because no one was going to read ReturnIfAbrupt.
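[Editorial illustration, not part of the discussion: a rough model of completion records and the ? and ! shorthands in plain JavaScript. This is purely illustrative, not spec text; the names `normal`, `thrown`, `Q`, and `X` are invented for this sketch.]

```javascript
// A spec "completion record" is roughly { type, value }, where type is
// "normal" for ordinary results or "throw" (among others) for abrupt ones.
const normal = (value) => ({ type: "normal", value });
const thrown = (value) => ({ type: "throw", value });

// `? Operation()`: if the completion is abrupt, propagate it to the
// caller (modeled here with a JS throw); otherwise unwrap the value.
function Q(completion) {
  if (completion.type !== "normal") throw completion;
  return completion.value;
}

// `! Operation()`: assert the completion cannot be abrupt, then unwrap.
function X(completion) {
  if (completion.type !== "normal") {
    throw new Error("spec assertion failed: unexpected abrupt completion");
  }
  return completion.value;
}

console.log(Q(normal(42))); // 42
console.log(X(normal("ok"))); // ok
```

The editorial change described above specifies ? and ! directly rather than routing ? through ReturnIfAbrupt, but the meaning for spec readers is unchanged.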
KG: Also, a couple of meta points of business. First, I wanted to call out that Aki has been on a quest to make the spec more standards-compliant, as it were, and as part of that she has added the correct copyright metadata. Previously the spec had the wrong copyright, one not approved for use in Ecma documents, and it has now been updated to match the Ecma copyright.
KG: And also, PR previews are working again. They were broken for a while; they should be working now. CI posts a link to a preview. However, to get a preview for a pull request, you need to either be a member of the TC39 organization on GitHub, or manually add the “request preview” label if you have permissions, or otherwise ask somebody to do it for you.
KG: Okay, last thing is that I wanted to call for interest in editorship, because SYG is going to have less time going forward, MF potentially has less time going forward, and I may or may not continue as editor depending on my employment status. For anybody who is interested in being an editor, now is a particularly good time. If you think you might be interested, the editor calls are open; you can find them on the calendar. They are currently at 2 p.m. Pacific on Mondays. Or feel free to talk to any of the editors, either in person (I believe Michael is the only one here in person, so talk to Michael if you happen to be here) or on Matrix; we’re happy to chat about editorship.
KG: And very briefly, I want to call out a little bit of upcoming work. We are probably going to introduce a CI warning soon for when you are introducing a phrasing which ESMeta does not know how to parse. It shouldn’t block a pull request from landing, but if you see that warning and wonder what it’s about, it’s just that you’re using a new phrasing, which possibly means there’s some other standard phrasing you should use instead. Possibly you chose the best way to say it, and you should ignore the warning.
KG: And finally, we would like to do some work to make the spec more linkable in general. I believe someone, NRO maybe, was pointing out that the concrete methods could use better linking, so we’ll hopefully have something there soon.
KG: That was a lot of talking very quickly. Any questions or anything?
RPR: Nicolo?
NRO: Yeah, when do you want the interest to be expressed by? Is there, like, some deadline you need?
KG: Chris, do you know the answer to that question? I believe it’s January.
RPR: Ideally some time this year, in order for us to do elections at the next meeting, which is the 20th of January. We will put out formal requests—sorry, Reflector posts, on both the editors and the chairs group. Okay, I think that’s all. Anything else, Kevin?
KG: That’s all I had. Thanks very much.
RPR: Okay, thank you. Please, can you review the notes to make sure that the summary and conclusion are correct.
Speaker's Summary of Key Points
- Normative changes are all bugfixes
- Copyright on the living spec has been changed from something bespoke to an actual ECMA copyright
- Anyone interested in editorship for next year should express interest by the end of the year, and anyone even potentially interested should feel free to attend editor calls before then
ECMA402 Status Updates
Presenter: Ujjwal Sharma (USA)
USA: Hi, yeah. For 402, there are two new normative pull requests. One of them we’re going to see later today; that’s 1032. The other one, 1035, is a routine change that we’ll probably hold off on until 2026. So, yeah, not much new.
RPR: Sorry, what was that, Ujjwal?
USA: That was the update.
RPR: You are complete.
USA: Yeah.
RPR: Okay. Are there any questions for Ujjwal? Nothing in the queue or in the room. Okay. Then we are done. Chip?
ECMA404 Status Updates
Presenter: Chip Morningstar (CM)
CM: So, 100-plus years ago, when Einstein published the theory of relativity, it spawned an entire cottage industry in bad sophomoric philosophy about how everything was relative. But in spite of that, it is really a theory of invariance: it says the speed of light is the same regardless of the observer. And so it is with JSON, which is unchanged with respect to time or location. So even though we are all in Tokyo, nothing in ECMA-404 has changed.
RPR: Thank you, Chip. I think that deserves a round of applause. Mark has something else to say about this.
MM: So the invariant you stated is the invariant that sort of drove things, but as for calling it the theory of invariance, the invariant he had in mind was the spacetime distance: x squared plus y squared plus z squared minus t squared.
RPR: Thank you for that clarification. I’ll also say, Mark, your audio is quite low quality. I think we still got it, but if there’s anything you can do to improve that for the rest of the meeting, that would be appreciated. Okay. ECMA404 rests. Thank you, Chip. Onwards to test262 status update with Philip Chimento. Philip, you are there?
Test262 Status Updates
Presenter: Philip Chimento (PFC)
PFC: We had compiled a list of bullet points in the last maintainers meeting, and then RGN just helpfully sent me a couple more with what’s happened since then, so thanks very much, RGN. I appreciate that.
PFC: One update is that we are welcoming back Mike Pennisi who is working on publishing test manifests for web features in the WebDX world. There’s some stuff that you can read about on the issue tracker if you’re interested in that, but otherwise it’s just a notice that this is happening. We’ve landed some tests around async module loading, immutable ArrayBuffer, and Intl era month code. And there’s something I’m also going to highlight in the Temporal item later, where we added a lot of coverage for edge cases using a new snapshot testing technique, so more about that later on.
PFC: And as usual, we’d like to ask: anything that you can do to help would be greatly appreciated. Right now we have a couple of open PRs with tests for explicit resource management (using and await using). RBN wrote these quite a while ago, and they’ve been mostly reviewed at various times, but there are still a couple of things that need some eyes on them. They’re fairly large pull requests, so anything that implementers might be able to do to help us out, by, say, making sure that the tests align with your implementation and flagging things that don’t, would be very helpful. Shout-out to Tooru Fujisawa, who did this for Firefox. But, yeah, we could use some help, particularly on the using and await using PRs, but also on anything that’s been waiting for a while; we do really appreciate that kind of help. It really moves the needle on getting things merged, because it means that we as the maintainers don’t have to do as much of a deep dive into the proposal.
Speaker's summary of key points
- Integrating test262 into web-features (https://web-platform-dx.github.io/web-features/web-features/) (More info: https://github.com/tc39/test262/issues/4602)
- We have landed a bunch of tests for async module loading, immutable ArrayBuffer, Intl Era Monthcode
- Added many tests for Temporal edge cases using a snapshot testing technique, more about this in the Temporal item
- We appreciate help from implementors on checking that open PRs with tests are correct for their implementation of a proposal, and flagging when that is not the case. Help is particularly appreciated on the remaining Explicit Resource Management PRs right now.
TG3: Security
Presenter: Chris de Almeida (CDA)
RPR: Next up, we have the TG3 update. CDA, are you ready for this?
CDA: Sure. The threat actors of TG3 continue to meet weekly to craft malicious payloads and discuss the security implications of proposals. If you are interested in either of those things, please join us weekly on Wednesdays. Thank you.
RPR: Thank you, CDA. After TG3 comes TG4. Nicolo, will you be giving us an update on source maps?
TG4: Source Maps
Presenter: Nicolo Ribaudo (NRO)
NRO: Yes, no slides, just two quick things. One is that we have spec text for the range mappings proposal. It’s linked in the range mappings repo, and it’s explained there relative to ECMA-426, if you want to look at it. And the second thing: we had a source maps session with the working group, and there was interest from multiple people there in working with us, so we’re going to figure out how to make that happen. And that’s it.
TG5: Experiments in Programming Language Standardization
Presenter: Mikhail Barash (MBH)
RPR: Okay. Thank you, Nicolo. And then, Mikhail, are you ready with the TG5 update on experiments in programming language standardization?
MBH: Yes. We continue with monthly meetings where we discuss various topics, ranging from statistical analysis of proposals to mechanisation of proposals in proof assistants, and we also arrange workshops. The most recent was arranged yesterday, and we had two presentations from people at local universities here in Tokyo, and one presentation by a TC39 delegate, so it was quite a nice event. I think we had 19 participants, which made it one of the largest workshops. And we would like to announce one of the workshops that will be held next year, co-located with the plenary in Amsterdam: it’s on the 22nd of May at TU Delft in the Netherlands. That’s it.
Updates from the CoC Committee
Presenter: Chris de Almeida (CDA)
RPR: Thank you, MBH. Over—sorry. Yes, we did. So, CDA, over to you for the code of conduct committee updates.
CDA: Yes. There are no updates from the code of conduct committee. No news is good news.
RPR: And are you looking for new members?
CDA: The one time I forget. Yes, if anyone is interested in joining the code of conduct committee, please reach out to one of us. Thank you.
Normative: In PluralRules, set compactDisplay only if notation is "compact"
Presenter: Frank Yung-Fong Tang (FYT)
RPR: Thank you, Chris. Next up is FYT. FYT, are you there? This is to discuss the normative change in PluralRules: set compactDisplay only if notation is "compact". Does anyone from Google know if FYT is on the line? There we go—sorry, you’re on Zoom, FYT. We see you now.
FYT: Is it my turn?
RPR: Yeah, your turn now.
FYT: Sorry, I just have some problem. Zoom asked me to quit. Let me bring up the agenda. First time using new computer. Sorry.
RPR: We can see your desktop, yes.
FYT: Okay, let me bring up the slides. Somehow it forced me to log out, so let me share. So can you see my presentation now?
RPR: That looks good. It’s full screen, yes.
FYT: Great. So I’m talking about PR 1032 for ECMA-402. It’s a small normative PR. To give you some background: Intl.NumberFormat has for a very long time had an option called compactDisplay. It’s read unconditionally, but it’s set into the internal slot conditionally, only if notation is "compact". And that is by design, because compactDisplay means nothing whenever the notation is not compact, okay? So the consequence is that, later on, that particular slot actually has three possible values: undefined, "short", or "long". But on the other hand, we have Intl.PluralRules, which recently added this option, and there we somehow didn’t do the right thing: we unconditionally read compactDisplay and also unconditionally set it into the internal slot. So what happened is that we have some inconsistency here. We added notation in PR 89 not long ago and also added compactDisplay in PR 1019. The problem is that when we later call resolvedOptions, for NumberFormat compactDisplay may return three different values, undefined, "short", or "long", but for PluralRules it can only return "short" or "long", never undefined, and that doesn’t really make sense whenever the notation is not compact. We found this out when we tried to implement this in V8: for efficiency reasons, the implementation actually reuses some internal structure to produce the resolvedOptions value. So if we wanted to retain the value whenever the notation is not compact, we would need extra memory to do that, for an option that is meaningless in that case. And it also has the problem of being inconsistent with NumberFormat.
So our PR just changes it to align with what NumberFormat does: we still read it unconditionally, but we do the same thing NumberFormat does and conditionally set it into the internal slot only if the notation is compact. So that’s basically it; this is a very small PR. We change the setting to be under an if condition. Any questions?
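The aligned behavior can already be seen in Intl.NumberFormat, which PluralRules would match after this change. A quick sketch (note: PluralRules support for notation/compactDisplay is recent and may not be in your engine yet, so this uses NumberFormat to show the behavior being aligned to):

```javascript
// Intl.NumberFormat reads compactDisplay unconditionally, but only
// stores it in the internal slot when notation is "compact".
const compact = new Intl.NumberFormat("en", {
  notation: "compact",
  compactDisplay: "long",
}).resolvedOptions();
console.log(compact.compactDisplay); // "long"

// With any other notation, the option is read but discarded, so
// resolvedOptions() reports undefined for it.
const standard = new Intl.NumberFormat("en", {
  notation: "standard",
  compactDisplay: "long",
}).resolvedOptions();
console.log(standard.compactDisplay); // undefined

// After PR 1032, Intl.PluralRules behaves the same way instead of
// always reporting "short" or "long".
```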
RPR: There are no questions, but Dan is on the queue.
DLM: Yes, thank you. And I’d just like to say we support this normative change.
FYT: Can I ask for consensus for this? Is there any objection?
RPR: Shane?
SFC: You can hear me? Okay, great, yeah. This was discussed and approved in the November TG2 meeting. It’s the third pull request in this area in about six months, and three makes a nice round number, so I hope this is the last time you’re seeing a pull request in this area, and that you’re not back again in a few more months with another one. And yes, thanks, FYT, for identifying this in the specification and fixing it.
RPR: Okay, we haven’t heard any objections. And so in the last few seconds, yeah, I’ll say congratulations, you have consensus.
FYT: Thank you.
Speaker's Summary of Key Points
- Need to change the spec for `Intl.PluralRules` to set compactDisplay only if notation is "compact", in order to align with `Intl.NumberFormat`
Conclusion
- TG1 approved the PR
Normative: make re-exporting a namespace object in 2 steps behave like 1 step
Presenter: Nicolò Ribaudo (NRO)
RPR: All right. Let’s move on. So, yeah, thank you, FYT. And then we move to Nicolò, with the normative change to make re-exporting a namespace object in two steps behave like one step. Oh, okay. Don’t put it on the Matrix. We do not—yeah, all right. I’ll get the Zoom link. All right, Nicolò logs into Zoom. No problem. Is it going all right or shall I fill time with—no, it’s not needed.
NRO: Yes, okay. This normative pull request is about an inconsistency in the spec that we noticed, where the spec does two different things in two very similar cases; sometimes that happens and it’s fine. But the reason I’m presenting this is that browsers, in this case, do not match the spec. So, a bit of context on how export star works. `export *` re-exports all the bindings from some other module, and in the example here, looking at the library file we don’t really know what bindings are being exported from it. In this case, it’s kind of exporting foo twice, because it’s exporting foo from left and from right. The way we deal with this when you try to import foo from the library, as in the file at the top, is that it’s a linking error, because it’s ambiguous which of the two foos we’re referring to. And the same happens—so notice here, just the change in the file right—even if the variables have the same value, because we’re not checking whether these two foos have the same value; we’re checking whether they are the same binding. Even if they have the same value, it’s still an error. Now, if left and right are not declaring a variable foo and are instead re-exporting a variable foo from the same place, there’s no ambiguity about which foo our entry point is importing. Yes, sure, it can go through two different paths, but they both resolve to the same underlying binding. And the same happens if we split the export from the declaration in two: if left and right import foo and then export foo, that still doesn’t create a local foo variable in left and right. So from the entry point, we pierce through and go all the way, from both sides, to the same variable. The same happens when left and right use `export * as ns`: when we import ns in the entry point, again it could go through two different paths, but it’s clear we’re getting the namespace of the dep module, so there’s no error here.
However, if we split the `export * as` declaration into an import and an export, we start erroring. And the reason we start erroring is that technically now left and right are declaring a local variable called ns whose value happens to be the namespace object; they’re not pointing directly to the namespace object as a binding. The proposed change is to make this not an error anymore, so that import star as a namespace followed by an export acts the same as `export * as ns`, getting the same equivalence we have for named exports. This is what the spec text looks like. You can see at the bottom of the screenshot—this is in a loop that goes through all the things exported by a module—that when we have an import followed by an export, we create a record to represent the export, and the change does the same when what is imported is a namespace via import star. And this is how browsers currently behave: SpiderMonkey and JavaScriptCore actually do match the spec, so they allow `export * as` but error for import star followed by an export, while V8 and XS do not match the spec, in that they make both an error. The proposed change is to make both okay. If we do not get consensus on making both okay, my personal preference would then be to do what V8 and XS are doing, preserving the equivalence between the two forms, which is useful for tools like minifiers; all tools right now just assume that those two things are equivalent in all cases. And, yeah, I see Jordan raising a hand. There are some people on the queue.
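A minimal sketch of the module graph being discussed (file names here are illustrative, standing in for the slides, which are not reproduced in the notes):

```js
// dep.js
export const foo = 1;

// left.js — re-exports dep's namespace in one step
export * as ns from "./dep.js";

// right.js — the "two step" form this PR makes equivalent
import * as ns from "./dep.js";
export { ns };

// library.js
export * from "./left.js";
export * from "./right.js";

// entry.js — per the current spec this is a linking error: right.js's
// two-step form creates a local `ns` binding, so library.js sees two
// distinct `ns` bindings (ambiguous), even though both hold dep's
// namespace object. After the change, the two-step form resolves
// through to dep's namespace just like left.js, and this import works.
import { ns } from "./library.js";
```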
JHD: So this is clearly something I—like, an area I don’t fully understand. My—I saw your PR, like, a week or two ago, and I thought it was doing something totally different. So export star as NS, you’re proposing that that do what exactly?
NRO: I’m proposing that—the only behavior it changes is whether in this case here, we detect that the 2NS exported by left and right are actually the same.
JHD: Okay. So let me try a different tack. What I think is happening here is that both left and right have a named export called ns that is the same (===) namespace object that comes from dep, and you’re saying that—are you talking about the caching behavior or something?
NRO: No, I’m talking about—okay, so let me first go back to the previous slide. In this case here, the variable that contains the namespace object is actually not stored—the binding is not defined—in either left or right. The binding here is defined on dep. It’s as if dep had a hidden variable declaration that contains the namespace.
JHD: You’re saying this is currently an error because you’re trying to combine the two NS.
NRO: It’s not an error, because you can imagine there’s a hidden variable inside the file dep, a const ns holding the namespace object, and left and right are exporting that same variable. However, what’s happening here is that in left and right, we’re declaring a new local variable whose value is the namespace. So we have two namespace variables, one in left and one in right, and they point to the same value. The proposal is that we don’t throw: since we’re not doing a `const ns` in those files, we’re just actually re-exporting ns from that file.
JHD: Okay.
JHD: So the thing you’re making okay, that currently throws, is just an alternate form of the previous one, which is just saying—
NRO: Yes.
JHD: It shouldn’t matter that these are both the same objects, just merge them.
NRO: Yes, and to clarify even more, in this case here on screen now, left and right are not declaring a local foo; they’re just re-exporting it. If left and right each had a `const foo` of their own, this would be an error, because there would be two foo variables that happen to have the same value.
JHD: But if I console.log foo in left and right—
NRO: They have the same value, but they’re two variables that have the same value.
???: That statement doesn’t matter.
NRO: The only way you can observe whether two variables are the same or not is to see whether they throw.
MAH: We support making this consistent, it should all work whether it’s a namespace export or just a regular exported binding. My question was how much test coverage do we have especially around—
NRO: We have test coverage.
MAH: Yeah. In particular we need coverage around aliasing, in one path alias to one name, and then in another path to another name, and further rename both to the same name.
NRO: Yeah. I’m working to make sure we have testing for those things
MAH: Thanks
GCL: Hello. Yes. I also support this change. As with many things in the modules part of the specification, the history for why they are the way they are is sometimes muddy, but it’s clear this is not an intentional divergence in behavior, and it’s good we are fixing it.
RPR: Dan?
DLM: Yes. We also support this change. I also wanted to thank you for the slides and the clear explanation of the issue.
NRO: Thank you.
NRO: And for the two engines—I don’t know about V8, but for the two that match the spec, I checked: it’s a two-line change to implement this.
NRO: The queue is empty. I heard no concerns. So I think I have consensus on this. Is there anybody who still wants to say anything?
RPR: Congratulations, you have consensus.
NRO: Thank you. And thanks, KG for working on the actual pull request for this when I was not able to do it.
RPR: Thank you, Nicolo.
Speaker's Summary of Key Points
- Currently, there is a difference in how `export * as ns from "foo"` and `import * as ns from "foo"; export { ns }` are handled when checking for ambiguous bindings in `export * from` statements.
- The discussion proposed to change the latter to behave like `export * as ns from "foo"`, effectively removing some "ambiguous binding" errors
Conclusion
- The normative PR has consensus.
Normative: Extend JSON.parse and JSON.stringify for interaction with the source text of primitive values
Presenter: Richard Gibson (RGN)
- proposal
- no slides presented
RGN: Let’s see. So… As mentioned, this is a request for Stage 4 advancement. It’s actually been quite a while since I presented JSON.parse with source. In the meantime, we have implementation support in—checking now—V8, SpiderMonkey, and JavaScriptCore. We also have the tests, which merged a while back and are mostly good; there are still a few gaps with respect to the test plan which I am working on shoring up this week. And we have a pull request against ECMA-262 proper to integrate it. There are no remaining issues on the proposal repository, as of about a week ago. As far as I know, this is looking good; everything I have just covered should satisfy the requirements of the process document. So hopefully this is pro forma. But as always, the queue is open.
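For context, the core of the proposal lets a reviver see the raw source text of each primitive value. A sketch (this requires an engine that has shipped JSON.parse source text access; the guard makes it degrade to plain parsing elsewhere):

```javascript
// Without source access, this integer silently loses precision,
// because 2**53 + 1 is not representable as a Number:
console.log(JSON.parse("9007199254740993")); // 9007199254740992

// With the proposal, the reviver receives a third argument: a context
// object carrying the exact source text of the value, so precision can
// be preserved by parsing it as a BigInt instead.
const big = JSON.parse("9007199254740993", (key, value, context) =>
  typeof value === "number" && context ? BigInt(context.source) : value
);
console.log(big); // 9007199254740993n where supported
```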
RPR: Dan Minor (DLM) Has support for Stage 4. End of message. Chip with a strong + 1. So we have heard support.
RPR: Are there any objections to Stage 4 for JSON.parse source text access? There are no objections. Therefore, congratulations, RGN, you have Stage 4! Yay.
Speaker's Summary of Key Points
- `JSON.parse` source text access is shipping in JavaScriptCore, SpiderMonkey, and V8
- tests have merged and will be improved imminently
- there are no open issues
Conclusion
JSON.parse source text access is at Stage 4.
Temporal status report and normative PR (slides) (30m, Philip Chimento)
Presenter: Philip Chimento (PFC)
PFC: I am Philip Chimento, I work at Igalia, and we are doing this work in partnership with Bloomberg. As promised, a progress update. There are now two implementations that pass 99% of the Test262 tests—at least, there were two weeks ago when I measured this. We've merged a lot of tests, and the numbers I'm showing are current as of two weeks ago.
PFC: I want to talk about how we hunted down implementation divergences using a snapshot testing technique, because other proposals might find that useful. And then, to close out, we have one normative change to propose, which I found using snapshot testing.
PFC: So here are the fun little graphs. As usual, the disclaimer that conforming to 99% of the tests is not the same thing as 99% of the work being done; this is just a rough indicator. Sometimes it goes down in between meetings because we add new tests for edge cases and find out that implementations are susceptible to those edge cases. Sometimes it goes up because we add more tests for edge cases that the existing implementations already handled correctly. Sometimes it goes up because engines fix bugs, of course. You can see we have two implementations very close to full conformance and a few more hovering around the high 90s.
PFC: I am going to talk a little bit about our plans for moving the proposal to Stage 4. The V8 implementation is now unflagged in trunk which is scheduled for a beta release on December 3 and a release to production in mid-January. So at that point, we will have two implementations shipping without a flag, which hopefully by then we’ll conform to the acceptance tests as the process document requires.
PFC: We still have some tests in Test262 staging. By the time we request Stage 4, we'll move those to the main Test262 tree and expand them as needed. We have a couple of issues open with identified gaps in test coverage which need to be filled. When that’s done, we plan to request Stage 4, going together with the Time Zone Canonicalization and Intl Era and Month Code proposals—that’s a good opportunity to move them all at once.
PFC: Right. So, the testing technique I said I would talk about. If you’ve been following the proposal repo, there have been a number of files added that introduce this snapshot testing technique. It’s based on taking a list of so-called 'interesting' inputs and just basically testing all the combinations of them, which works out to hundreds of thousands or in some cases millions of combinations. For example, we have one test that takes the difference between every pair of dates in a four-year period in the Gregorian calendar. It’s impossible for a person to go through all of those and figure out what the answer should be. So for this technique, we don’t actually have to care what the answer should be. We just care whether the answer changes between implementations or changes after you merge a pull request. So these are not correctness tests; they’re more intended to raise a flag if something is wrong: flag it when the results are different between implementations, or before and after a pull request; flag it when an assumed invariant does not hold for one of the millions of combinations of inputs; or flag it when a combination of inputs causes an assertion to fail, which is how I found the bug that we’re requesting a normative change for later in this presentation.
PFC: If you are interested, you can check it out. You can follow the link here in the slides. And I made a script that makes it easy to run it on your implementation, as long as it supports JSON imports. If it doesn’t, you can probably still run it, but you are on your own to get it to work with the loader or write your own loader using your own I/O facilities.
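The technique described above is generic. A toy version, with a stand-in operation instead of Temporal (the real harness lives in the proposal repo, and these input lists are invented for illustration), looks something like:

```javascript
// Snapshot testing: enumerate all combinations of "interesting"
// inputs, record each result, and diff runs against each other rather
// than asserting known-good answers. A difference between engines, or
// between before and after a PR, flags a potential bug.
const interestingA = [0, 1, -1, 2 ** 31, Number.MAX_SAFE_INTEGER];
const interestingB = [0.5, 1, 7, 365.25];

// Stand-in for the operation under test (e.g. a Temporal rounding op).
const operation = (a, b) => Math.round(a / b) * b;

function snapshot() {
  const results = [];
  for (const a of interestingA) {
    for (const b of interestingB) {
      const r = operation(a, b);
      // An assumed invariant can be checked on every combination:
      if (!Number.isFinite(r)) throw new Error(`invariant failed: ${a}, ${b}`);
      results.push(`${a} / ${b} -> ${r}`);
    }
  }
  return results.join("\n");
}

// In real use, write this string to a file and diff it against the
// snapshot produced by another engine or an older build.
console.log(snapshot().split("\n").length); // 20 combinations
```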
PFC: I wanted to highlight this in the presentation because I think it was incredibly useful for Temporal and will probably be useful for other proposals as well. So just to show you how worthwhile it was, this is a list of the bugs that we found. We found a spec bug—more on this later in the presentation. Here are all the bug reports we opened in implementations. Very sorry, I haven’t opened issues yet in GraalJS, but I plan to do so when I have a moment. And just to reinforce that the snapshot tests are not intended to be part of the correctness suite, but just a flag for when something is going wrong: we also added specific test coverage for all the bugs to Test262, so now they are part of the actual test suite, and any new implementations will have to pass those tests.
PFC: I am planning to continue to do more of the snapshot tests, around non-ISO calendars with respect to the Intl Era Monthcode proposal. You may find this interesting if you are working on a different proposal. If you would like to talk more about it, feel free to drop me a line and I can walk you through how I did it for Temporal.
PFC: So on to the bug and the normative change. I found an assertion failure in duration rounding. The bug pops up when you have a duration with a date component (for example, 1 year here) and a time component (for example, one hour here). You round to a calendar unit, and you have a date relative to which you round, which is an end-of-month date. Without going into too much detail, the duration rounding uses a bounding algorithm, where we construct two bounds that are multiples of the rounding increment—in this case, the rounding increment is one year—and make sure that one is shorter and one is longer than the given duration. And then you round based on which one is closer.
PFC: So with this particular combination of inputs, that didn’t work. The algorithm generated bounds where the given duration wasn’t in between them. And, of course, as you can probably guess, that leads to an assertion failure and also, you know, even if you have a release build where you disable assertions, that leads to wrong answers. So there is a normative pull request here to fix this. Thanks to Tim Chevalier for fixing it. We also had a bunch of help from Adam Shaw.
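The shape of the failing inputs, roughly, in code (a sketch only: it needs a Temporal-enabled engine, and the exact values here are illustrative rather than taken from the bug report):

```js
// A duration with a date part and a time part, rounded to a calendar
// unit relative to an end-of-month date:
const duration = Temporal.Duration.from({ years: 1, hours: 1 });
const rounded = duration.round({
  smallestUnit: "year",
  relativeTo: "2024-01-31", // end-of-month relativeTo triggered the bug
});
// Before the fix, the bounding algorithm could compute two candidate
// durations (multiples of the 1-year increment) that did not actually
// bracket `duration`, failing an internal assertion and, in release
// builds with assertions disabled, producing a wrong rounding result.
```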
PFC: Usually we say—in fact, I think the spec explicitly says—that if an assertion doesn’t hold, that’s an editorial error in the specification. But in this case, the fix is actually quite involved. It’s not just a typo in the spec text. And code has shipped to the web that gives an incorrect result in release builds of Firefox. So I think this warrants a normative PR.
PFC: Hopefully that went quicker than the timebox. I would like to ask questions now.
RPR: Are there any questions? The queue is empty.
PFC: All right. Then if there are no other questions—nobody puts one on the queue at the last moment—I would like to request consensus on the normative PR that I just presented.
DLM: I am supporting the normative change. And thank you.
RPR: Chris also supports.
RPR: So we have heard support. Are there any objections? No objections. So congratulations, Philip. You have consensus for the normative PR.
PFC: All right. Thank you very much, everybody. We will go on to merge it and onward with the tests.
Speaker's summary of key points
- There are two almost completely conformant implementations, one still flagged. We outlined a path to stage 4 for the proposal and listed the blockers.
- We presented a testing technique which has been successfully used to find and fix differences between implementations.
- We presented a spec bug found with the above technique and a normative change to fix it.
Conclusion
- A normative change to address an edge case in `Temporal.Duration` rounding (https://github.com/tc39/proposal-temporal/pull/3172) reached consensus.
Error.captureStackTrace for Stage 2
Presenter: Daniel Minor (DLM)
DLM: And the slides are up
DLM: I am sorry about that. I was not entirely prepared. Here we go. Sorry about the delay. Yeah. I would like to talk about Error.captureStackTrace again. So, a quick reminder of the motivation. Here is some material from the npm page. Basically, the Error.captureStackTrace static method adds stack trace information to things that are not actually Error objects but are created to look like Error objects and would like to have a stack.
DLM: A quick reminder of the history. In the depths of time, Chrome shipped this; there’s evidence as early as 2015. More recently, JSC shipped this, and SpiderMonkey did as well, in our case because we started to see web compatibility problems. So unfortunately we are in a situation where we have three engines shipping two different things, and no standard. This proposal reached Stage 1 in February. I presented in the July 2025 plenary, where we discussed issue number 8, which is about observable changes caused by prepareStackTrace. That is another Chrome API that we are not planning to standardize, but it is web reality. My initial attempt at specifying this was as a data property; we heard feedback from the V8 team that that is not acceptable because of the behavior of prepareStackTrace. I also did a little bit of work on one of the suggestions that came out of the July plenary, using closures, and got feedback from other members of the committee that they would prefer the getter and setter to be identical across all instances. So my latest attempt, which I’m getting feedback on today, is a version that dynamically adds an internal slot.
DLM: So the proposed behavior here is that Error.captureStackTrace would store the stack string in an internal slot, which is then used by the getter as the value if the slot is present, and otherwise undefined. I would like to point out that this would be the only place in the specification where we dynamically add an internal slot. This is allowed if expressly specified. The setter does something different: what it does is override the stack property with the object provided as the argument, replacing the getter–setter pair, the accessors, with a data property. This is in response to other feedback; we would like to avoid any kind of internal communication channel. Part of the reason for this is that, with the proposed specification, the error stack setter overrides with a data property anyway. This also matches what JavaScriptCore and SpiderMonkey have already shipped, and V8 shipped this for a number of years as well before they moved to a data property. Both of these are web compatible. Here’s the proposed specification text. I wouldn’t say this is necessarily complete or correct—I am going for Stage 2 here, so there are things still to be improved. So briefly, what I am trying to do here is first make sure things are an object. Then, if there’s a limit specified—this is part of the existing API—what that does is partially censor the stack. This isn’t a security mechanism, but a convenience for users. Either way, we end up with a string. Then we have the choice of strategy, depending on whether we are using a data property or an accessor. The data property version installs the data property. If it’s an accessor, then what we do is install a getter and a setter as I mentioned before. I have a version of the proposed getter and setter as well: the getter returns the value of the internal slot if present, and otherwise undefined, and the setter, as I mentioned, overrides itself with a simple data property.
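For reference, the surface behavior of the existing API (this runs today in V8-based engines such as Node.js; the internal-slot discussion above is about how it would be specified, not about changing this observable behavior):

```javascript
function makeFakeError(message) {
  // Not an Error instance, but we want it to carry a stack.
  const obj = { name: "FakeError", message };
  // Attaches stack trace information for the current call site to
  // `obj`. The optional second argument censors frames: everything up
  // to and including makeFakeError is omitted from the trace.
  Error.captureStackTrace(obj, makeFakeError);
  return obj;
}

const fake = makeFakeError("boom");
console.log(typeof fake.stack); // "string"
console.log(fake instanceof Error); // false
```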
RPR: On the queue, Mark has a clarifying question.
DLM: Yes, please. Go ahead, Mark.
MM: Yeah. It’s actually just a clarification. You said that this is the only operation that dynamically adds an internal slot?
DLM: I had a hard time finding anything else in the spec.
MM: This is not the thing that has the property. It’s the thing that—subclassing that adds a property.
MM: It’s the constructor change.
DLM: Okay. Thank you. It’s good to know there’s some precedent for this proposed behavior.
DLM: So I don’t have the queue visible, but I am happy to hear any questions or comments.
MM: What about non-extensible objects? This would fail; correct?
DLM: That’s correct.
MM: Okay. Great.
RPR: Please use the queue in future, Mark. Thanks.
JHD: Yes. So I would prefer no choice in strategy here. Shipping this as specced is better than the current reality for sure, locking down, you know, as many semantics as possible. Both approaches are web compatible; the only benefit to allowing the choice is that web browsers don’t have to do anything—all the browsers are just like, cool, we are already good, more or less. I think it’s reasonable to ask browsers to make a change to avoid implementation-defined choices. I don’t have a strong preference as to which option is selected, but I would really prefer to see us just pick one unless there’s a compelling reason not to.
DLM: No, that’s fair feedback. We are more or less limited to allowing the accessor approach because there’s substantial use of the V8 APIs. In terms of allowing both here, or not allowing both—I guess, yeah, do you see an advantage for users of the web or developers in only specifying one here, or is this an argument around just keeping the specification nice and tidy?
JHD: I would say both. I think anything that is not fully locked down in the spec is something that library authors and people in the ecosystem generally have to deal with as permutations. Like, rather than saying V8 can’t change away from accessor, another alternative I suppose could be that step 7 is marked as legacy, meaning nobody new should do this, but we acknowledge that somebody will. Right? Because then that says all future implementations will pick step 6. That would be fine too; that would be better for me than the current strategy, which says you can build new implementations with either approach, and that is not stopping the bleeding. Whereas, you know, either picking one or at least putting big frowny faces on one of the two is, I think, a better approach.
DLM: Yeah. Okay. That’s fair enough. I guess I would like to hear how you feel about the argument that, since calling the setter installs a data property, people will have to expect a data property there anyway.
JHD: That is true. Yeah. I mean, I guess—I said I don’t have any strong opinions. I do have some much weaker opinions, like I don’t like the accessor approach at all; I prefer the data property. It’s simpler and more straightforward. But I am not going to argue with V8 about it.
DLM: Yeah. I mean, obviously, we would prefer not to have to rewrite our data property to be accessors. I assume JSC is in the same boat. It would be nice to hear from the other implementers about this.
RPR: Mathieu
MAH: I’m not an implementer but I do have concerns for them, I wouldn’t want to force accessors on engines that don’t want an accessor.
KM: Okay. Yeah, I think similarly. It’s not the end of the world to change it, but it would be preferred not to. It’s sort of—it doesn’t matter, in effect, what happens under the hood; it’s a data property that acts like an accessor, so practically it doesn’t matter that much. It slightly feels like a better API for it not to be an accessor—not a thing that can change over time. But I am not married to it per se.
DLM: Yeah. So we are willing to change as well, obviously. I don’t know—someone from the V8 team? I am also interested in hearing any feedback on the idea of marking the accessor approach as being, you know, not particularly nice, or deprecated, or something like that.
OFR: Yeah. Maybe I can also—that’s my next topic anyway. There is a consistency consideration with the error stack accessor that we want to put on the prototype for normal Error objects. So the reason I don’t think it’s a terrible idea to have an accessor is that it’s consistent with normal Error objects, and we could use the same implementation for both of them.
JHD: I can speak to that from the perspective of the stack accessor proposal, which is that the big difference is that it’s an own property and not an inherited one. I think that is much higher on the list of things people will notice than whether it’s a data property or an accessor. And also, this whole API is legacy. This whole API is only proposed because of web compatibility. From my perspective, it should never have existed; it shouldn’t exist. But because JSC and SpiderMonkey shipped it, it’s too late to get rid of it. And of course V8 shipped it for so long. I am not as worried about consistency because people shouldn’t use it. They use it in libraries to do hacky, fancy things, and that’s cool; a regular developer doesn’t use this function—like a standard application developer, in my opinion. So I am comfortable with it being inconsistent. As Keith mentioned, internally it’s the same implementation regardless of whether it’s exposed as an accessor or a data property, so it won’t matter which one it is in terms of implementation difficulty. Obviously I could be wrong about that. But… yeah.
RPR: Mathieu?
MAH: Yeah. I am all for consistency, which is why an accessor doesn’t shock me. However, I want to be clear that in TG3 we discussed potentially sharing the accessors themselves between Error.captureStackTrace and Error.prototype.stack, and we believe that’s not a good idea, but we don’t have to discuss that now.
RPR: Back to Jordan.
JHD: So yeah. Particularly from V8’s perspective, is it okay if we just mark step 7 as legacy? Legacy is the marker the spec has for “this thing is smelly, don’t use it.” So can we mark step 7 as legacy, which makes it implicitly normative optional, and we can add a non-normative note that gives context, perhaps? Is that okay? Does anyone have a problem with that? We can bikeshed the wording later.
DLM: It seems like a reasonable approach to me.
OFR: What does it mean exactly? Because it also doesn’t make a lot of sense to put that on. Well, maybe it does. We are not planning to change it.
JHD: There’s lots—
OFR: There’s an interaction with—so maybe this was not entirely clear from the presentation so far. But there is a prepareStackTrace API, and that prepareStackTrace API in V8 allows you to install your own JavaScript hook to format a stack trace. And that basically means when you call this accessor, it can call JavaScript code in V8 if you use this other unspecced API. And this other unspecced API is widely used; the data shows captureStackTrace is used on 1% of page loads and prepareStackTrace is used on 0.1% of page loads. This is widely used. It would be a multi-year project to get rid of it. So I don’t expect this to go away, as long as we have the prepareStackTrace API.
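As a hedged sketch of the V8-only hook OFR is describing (it is not part of the proposed spec text), this is roughly how Error.prepareStackTrace is used in V8/Node.js:

```javascript
// Installing the hook makes V8 run user JavaScript lazily, the first
// time the `stack` accessor is read, passing the error and an array of
// CallSite objects. Whatever the hook returns becomes `err.stack`.
Error.prepareStackTrace = (error, callSites) =>
  callSites.map((site) => site.getFunctionName() ?? '<anonymous>');

const err = new Error('boom');
const frames = err.stack; // hook runs here, so this is an array of names

Error.prepareStackTrace = undefined; // restore default string formatting
```

The laziness is the point JRL raises later: the engine only captures raw frames at throw time, and the (potentially expensive, user-visible) formatting is deferred until `stack` is actually read.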
JHD: That’s fine. There’s lots of things in the spec that are legacy that are never going away, like `substr` and the `__proto__` accessor, stuff like that. It’s totally fine. The value there is a signal that developers should avoid using it in new code if they can, and new engines shouldn’t add it unless they have web compatibility reasons. But it’s still very useful, I think, to classify them as such, even with the very explicit understanding that it’s not going away in V8.
OFR: Okay. Yeah. I mean, I guess under this—yeah. Understanding of legacy, we don’t have a problem with it being marked this way.
DLM: Yeah. Okay. Great. Thank you. Is there anyone else in the queue?
RPR: Justin
JRL: Another clarification. prepareStackTrace is only used on 0.1% of web page loads, but it’s used in essentially 100% of Node apps. The fact that it’s an accessor allows performance wins, because we don’t need to perform the actual prepare step until the user tries to access it. Capturing the current stack frames can be done internally in the engine, skipping all the user code, and only when the user reads the error’s stack does the prepare step run. This matters for the entire Node ecosystem.
RPR: Thank you, Justin.
DLM: Thank you, everyone, for your feedback. I would like to formally ask for Stage 2, with the spec text that I showed. It needs to be fixed up a little bit, plus the legacy note that Jordan suggested.
RPR: James has support for Stage 2. Mathieu?
MAH: Support Stage 2 as well. Especially, if we go for accessors, thanks for specifying the internal slot route.
RPR: All right. I am hearing support. Dmitry and Chip also have support.
RPR: Thanks everyone. Congratulations, you have Stage 2.
DLM: Thank you.
RPR: There was support from Jordan. All right.
[LATER]
JHD: We forgot to pick Stage 2.7 reviewers for captureStackTrace.
RPR: Quickly, yeah. For captureStackTrace we are looking for Stage 2.7 reviewers. One reviewer is Jordan. Thank you. Any other volunteers? And Michael Ficarra. Excellent. Thank you.
Speaker's Summary of Key Points
- Presented an approach involving an implementation-defined choice of installing a data property or using an accessor. The accessor version dynamically adds an internal slot; the getter returns the value of this slot, and the setter overrides it with a data property.
- Continued use of accessors is important for V8, in particular for Node.js
- Preference to only use the data property version in new implementations
Conclusion
- Support for Stage 2, with text noting that the accessor-based approach is not recommended for new implementations.
- JHD, MF are Stage 2.7 reviewers
Intl Locale Info API for Stage 4
Presenter: Frank Yung-Fong Tang (FYT)
FYT: All right. Okay. Let me see.
FYT: Can you see my screen
RPR: Full screen. You are good.
FYT: This is proposing Intl Locale Info for Stage 4. I will go through the motivation and history; we had one request blocking consensus for a normative PR, and then I also have a second request, for advancement. Basically this has been in TC39 for a long time. It’s a proposal to expose locale information, like the week data: the first day of the week, the hour cycle. We advanced to Stage 1 in 2020, and Stage 2 and Stage 3 in 2021. And then we had a lot of updates, some functionality changes, et cetera, over a couple of years, and iterated. So currently we have issue 75, sorry, 76, I think, which basically says we should define the fallback behavior of one of the abstract operations. The idea is that when we wrote the proposal, we left the fallback behavior as implementation-dependent, which is very vague language, and Anba from Mozilla suggested we clarify it: how does it really fall back? So we had some issues and got a fix. We changed the PR a little bit because of some other thing, and the review is now in PR 92, which is the normative change proposed by Anba from Mozilla. It basically makes things more explicit; it’s not changing behavior, it makes behavior that was not specified clearly specified. TG2 discussed it on October 9, 2025. This particular issue is also what is blocking Mozilla: currently V8 and Safari ship it, but Mozilla is blocked on this one, and I want it addressed so that they can ship. So the only remaining issue is that Mozilla is not shipping it; there are no other known issues. So we are currently requesting committee consensus for the normative PR 92. And then I can show the PR. It’s a little bit big. But mainly—can you see the PR here?
RPR: You are showing the PR. There’s quite a lot of text and it’s a little bit small. But…
FYT: Right.
RPR: Shane is on the queue if you would like to go to that, FYT.
FYT: Yes, please
SFC: We discussed it at the TG2 meeting in October, and everyone was quite happy with this change.
FYT: Let me see. So this is proposed by Anba, and I reviewed it, and I think Daniel Minor has reviewed it, and, let me see, in TG2 we had some discussion. Apple is looking at this, and several other people are looking at it. Yeah. Any other questions?
DLM: I am on the queue. I wanted to say we support the normative change, and to thank you for working with Anba to get this resolved.
FYT: Anyone else?
RPR: Are there any objections to this normative change? No objections. Congratulations, FYT. You have consensus.
FYT: Okay. So now that we have this consensus—because it really just specifies the current behavior—I want to report what implementation stage we are in now. Chrome/V8 implemented the proposal in M99, with some changes made since. Safari implemented it in v18, September 2023. Mozilla is pending on the issue we just resolved. And there’s also a polyfill. So with two browsers having implemented it, and Mozilla only blocked by that issue, once this gets merged we should be able to advance to Stage 4. In TG2, on October 9, we agreed that we first needed to resolve this issue, which we just did, and also have the Stage 4 PR reviewed. The Stage 4 PR will still need to change a little bit based on the merge of what we just approved. The editors have looked at the PR; RGN, I think, looked at it, and there may still be some minor issues we need to address.
RGN: Yeah, my review turned up nothing severe.
FYT: Mostly editorial stuff. Right? Yeah. So I would like to ask for consensus to advance to Stage 4.
RPR: All right. Do we have any folk in support or objections to Stage 4? Chris has support.
FYT: Thank you.
RPR: Shane?
SFC: Yeah. I am really excited to see this come all the way through. I know FYT has stuck with the proposal for several years. It was supposed to be a small proposal, and we found a lot of issues with it along the way, and FYT has stuck with us to the end, resolving the issues. I think what we end up with here is a really, really good Intl Locale Info API that lets developers build out applications, including custom calendar GUIs and other key use cases. Thank you, FYT, for sticking with it to the end.
RPR: We also have support from James. Daniel Minor. And Christian.
FYT: Thank you.
RPR: I am not hearing—
RPR: Is consensus reached? Yes. Congratulations, FYT.
FYT: Thank you
RPR: You have Stage 4.
FYT: I appreciate it, everybody.
Speaker's Summary of Key Points
- Anba from Mozilla proposed PR 92 and TG2 agreed; TG1 approval was needed to merge.
- With the merging of PR 92 there are no outstanding issues, so Stage 4 was requested.
Conclusion
- TG1 agreed to merge PR 92
- TG1 agreed to Stage 4
Iterator Sequencing for Stage 4
Presenter: Michael Ficarra (MF)
MF: I will try to do it 2 minutes faster than I planned.
RPR: Thank you
MF: So this is iterator sequencing. Remember that it adds the iterator.concat function, which produces a new iterator. And that iterator yields all of the values that would have been yielded by all of the passed iterables.
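For engines that don’t ship it yet, the observable yielding order can be approximated with a simple generator. This is a sketch only; the real Iterator.concat also validates its arguments and returns a proper iterator helper object:

```javascript
// Minimal stand-in for the proposed Iterator.concat: yields every value
// of each iterable in order, exhausting one before starting the next.
function* concat(...iterables) {
  for (const iterable of iterables) {
    yield* iterable;
  }
}

// Works with any mix of iterables: arrays, Sets, strings, generators.
const result = [...concat([1, 2], new Set([3]), 'ab')];
// result is [1, 2, 3, 'a', 'b']
```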
MF: This is what the spec text looks like. It has review and approval from KG. We have implementations from JSC, who originally implemented it in October of last year. And it has been available behind a flag since November of last year. SpiderMonkey also implemented it, in November of last year. And it has been available in various forms behind a flag until just a few days ago, I think, actually. Where it was unflagged in nightly releases. No signals from V8 yet about their plans to implement.
MF: But that is two implementations. I am looking for Stage 4.
RPR: Daniel Minor?
DLM: Yes. We support Stage 4 and yes, you are correct, this will ship in Firefox 147.
OFR: Yeah. It’s planned to be implemented. So all good.
RPR: All right. So this is the formal call for Stage 4. We have had—
MF: Yes
RPR: Two for support and Jordan. Are there any objections? I hear no objections. So, congratulations, Michael. You have Stage 4.
Speaker's Summary of Key Points
- Iterator Sequencing has had two implementations for approximately one year.
Conclusion
- Stage 4
Keep trailing zeros in Intl.NumberFormat and Intl.PluralRules update
Presenter: Eemeli Aro (EAO)
EAO: This is going to be a bit of a two-parter: first an explanation of a PR I’d like to merge fixing some small stuff, and then an issue and a possible solution for it, if we want to opt into that. The latter part is why this is 60 minutes, because it’s about rounding and it’s about precision, and these topics often take up quite a bit of committee time. So I don’t know if we’re going to fill up the whole of it. But I guess we’re going to find out.
EAO: The first set of fixes here is related to the treatment of this slightly peculiar thing where we need to count string digits, a concept that’s entirely internal to Intl.NumberFormat, and this proposal’s changes in how we deal with string values.
EAO: And the first part of that is that when we are parsing a string, we accept leading zeros, so something like 0012.3 is a numeric string that we support in Intl.NumberFormat.format and Intl.PluralRules.select, and there we need to trim those leading zeros so they don’t count toward how many digits we consider that value to contain. And because we’re doing this inside a syntax-directed operation, we need to do the zero trimming as a syntax-directed operation as well. It all turns out fine. It added a little bit of complexity to the spec language. I don’t think it impacts engines much; they just need to trim leading zeros and then deal with it as before.
EAO: The second part of leading zeros is sometimes we don’t require them at all. So a value like .45 needs to be handled as if it’s got a leading zero before it. And because we’re counting these string digits, rather than fraction digits or significant digits or integer digits, we end up needing to count three as the proper count for .45.
EAO: And then there’s the changes that we need to apply when we’re changing the base of the value that we’re formatting. So specifically, for example, when we’re formatting a value like 0.06 and we’re formatting that as a percentage, we multiply the input by 100, so we should format that as 6%. Effectively, this can be thought of as ensuring that when we’re moving from 0.06 to 6, we don’t add more precision than the six that we’ve got there, so we don’t end up with 6.00%; it’s more like 006%, and then we do the same leading zero trimming that I presented earlier. One slightly tricky part of this is that because Intl.NumberFormat can do both style: ‘percent’ and notation: ‘engineering’ and/or notation: ‘scientific’, we might need to do this thing twice, so it’s not complicated, we just need to be careful about doing it. I discovered these things when I was writing the tests for this. To see more examples of how stuff works with this, there’s a test262 PR link later in the slides, and it should be included in the agenda.
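The inputs EAO describes can be tried with the numeric-string support that already shipped with ECMA-402 NumberFormat v3 (Node 18+, modern browsers); the trailing-zero retention itself is the change under discussion and is not shown here:

```javascript
// Numeric strings are accepted by format(); leading zeros are allowed
// and don't change the value, and style: 'percent' scales by 100, so
// the string '0.06' comes out as 6%.
const percent = new Intl.NumberFormat('en-US', { style: 'percent' });
const scaled = percent.format('0.06'); // '6%'

// Leading zeros in the input string are trimmed before digit counting.
const decimal = new Intl.NumberFormat('en-US', { useGrouping: false });
const trimmed = decimal.format('0012.3'); // '12.3'
```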
EAO: So with this being presented, given that this is 2.7, I understand that I ought to be asking whether it’s cool to merge this. JMN has given this an approval. I don’t actually know, do I technically need my Stage 2.7 reviewers to also approve every PR or how does it go? But my question here, before we get to the next more complicated thing is to ask is this cool? Does anyone have any questions? Is there anyone on the queue?
CDA: RGN is on the queue with an empty topic.
RGN: Yeah, I just jumped in real quickly to say that I haven’t looked over the pull request yet. I do intend to. But I am in favor of it in spirit. I think that fixing those edge cases came up pretty early on. I don’t think you need approval at this point to merge it because all the text is going to be subject to further review anyway. So if your preference is to push the button, I support that.
EAO: My preference is for you to say I press the button and then I press the button.
RGN: You might have to wait a little bit, then.
EAO: I’m not in a particular hurry about this. Please review this when you get to it and that will be fine for this.
RGN: Okay.
SFC: Yeah, I’ve also reviewed the pull request in spirit, and I basically am in the exact same situation as RGN, where this is a sufficiently, like, complex part of the specification that I want to make sure that, you know—that I review it thoroughly, but I’m definitely in favor of what it’s trying to do.
CDA: That’s it for the queue.
EAO: Cool. I think I’ve got enough on this that I need, and then to the thing that I don’t know how big or—how long we’re going to spend on it. Basically, what we ended up discovering relatively late about this proposal is that it’s impacting maybe a little bit more than what has been presented before in terms of what values it impacts. And this needs to be acknowledged and we need to figure out are we cool with working with it as it’s currently spec’d, which I think is the right thing to do, but for which we ought to make a committee decision.
EAO: So specifically, what I’m talking about here is this. What we’ve been talking about so far as a change to the behavior is that when we have a NumberFormat formatter with which we are formatting a string value that has trailing zeros, like the first line example there, 2.00, which has a clear precision of effectively three digits, we retain as much of that in the output as is requested by the options, and this then, yes, differs from how the same value as a Number or as a BigInt gets treated. So that’s as before. But what has not been explicitly noted before this is that we also get this behavior when the number that we are formatting is rounded, because it starts with a precision that is greater than the precision to which we are formatting. In the example here, we’re using maximumFractionDigits: 1; the default value for maximumFractionDigits is 3, but this can vary up to 100, I think, or you can use significant digits for the precision, and then you opt into that.
EAO: In any case, if you’re formatting a numeric string value that contains more digits of precision than you can account for directly in the output, and therefore you need to do rounding, if the result of that rounding is such that it would have trailing zeros, those trailing zeros are retained. So specifically, because 2.03 with maximumFractionDigits: 1 rounds to 2.0, we would format it as “2.0”. Rounding up works the same way: 1.96 rounds to 2.0, and therefore we format it as “2.0”. This is a slightly greater change to the behavior than previously presented. But, again, because of the very, very limited scope of what this actually affects, we still think that this is fine to do and it’s not going to break the web. But it is a little bit more than previously. And as a slightly mitigating factor, we propose that we could do what was originally proposed in the discussion of issue #1 of the proposal: we have this existing option in NumberFormat, trailingZeroDisplay, which is currently only used to trim the zeros if we end up with an integer value. We could add a new value ‘stripToMinimum’ to that option. Using that, you could get the current behavior; we are still proposing that going forward the default behavior changes to keep trailing zeros, but you could opt in with trailingZeroDisplay: ‘stripToMinimum’ to the behavior that currently happens, which means that you end up not retaining trailing zeros in these cases. So the question here is: should we merge this PR, and/or are there other concerns or questions about the behavior identified in issue #11?
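To ground the example, here is today’s shipped behavior with Number inputs, which this proposal does not change; under the proposal, the same values passed as strings would instead keep the trailing zero and format as "2.0":

```javascript
// With maximumFractionDigits: 1, both 2.03 and 1.96 round to 2.0, and
// current engines then strip the trailing zero, printing '2'. Number
// inputs keep this behavior; the proposed change affects string inputs.
const nf = new Intl.NumberFormat('en-US', { maximumFractionDigits: 1 });

const roundedDown = nf.format(2.03); // '2' (2.0 with the zero stripped)
const roundedUp = nf.format(1.96);   // '2' (also rounds to 2.0)
```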
NRO: Just a question. You said that we could do that. Is it just we can preserve the behavior or is there actually a need for preserve, for having a way to get the previous behavior? Like, is there some way to say, oh, I actually need the behave, please give me an option for it, or is it just that it’s possible for us to provide an option?
EAO: It is possible for us to provide an option. We still have not had anyone come up explicitly stating that they want or need the current behavior.
NRO: Okay, then I would prefer to not do the option unless somebody says we need this.
SFC: Yeah, initially when I first saw this behavior, I found it a little bit surprising, a little bit greater in scope. But now that I’ve spent more time investigating some of the cases here, I think the proposed behavior is quite a bit better, because the alternative introduces a very odd inconsistency: for example, if you’re formatting both 2.00 and 2.01, it doesn’t make a lot of sense that one of them retains the trailing zero and the other doesn’t, because that’s not the use case that’s intended here. So I think it makes more sense to adopt the behavior that’s in the specification already, which we only recently realized was there when writing test cases.
SFC: So I like the specification as proposed, as we had approved it for Stage 2.7. And about the option, I tend to agree with Nicolo that, like, we have a very clear path for adding such an option. And I think that I would like to see a little bit more data on the demand for that.
JAD: Potentially a dumb question. The previous slide totally made sense to me, because if you call toString on 2.00, you get 2 back, so that adds up. But the next slide is introducing differences: if you take 2.03 and call toString on it, you’re going to get a different value on the other end. It seems weird.
EAO: That is what happens, yes; it is maybe a little weird. But it’s maybe the least weird thing we can do here, because I think any other behavior gets quite complicated, honestly. It’s not simple to establish a rule for when not to apply the rounding behavior. For instance, if instead of the string 2.03 we were formatting the string 2.0300, which clearly has trailing zeros, we would still end up with 2.0 as the formatted result. Or should we then also go to just 2? Yeah.
CDA: Stephen?
SHS: Yeah, I just wanted to point out that initially I thought it was a little bit weird, but when I think about it further, converting between string and number is inherently either fabricating or losing that precision data, and I think it’s actually quite natural that this gives a different result.
CDA: WH.
WH: Rounding works differently between numeric and string modes. Let’s say you have a Number, and you want to print it with a maximum fraction digits of, let’s say, 2. In order to actually do it correctly, you must turn it into a string and then round it. If you try to round it directly from a Number, half the time you’ll get the wrong results when the first dropped digit is a 5.
EAO: Just clarifying that, Intl.NumberFormat does support explicitly selecting a rounding mode, defining the behavior of rounding for concerns around this, not just specific to the changes proposed in this proposal, but overall behavior for the formatting—for the rounding of values for formatting.
WH: Yes, and that is the crux of the problem. Let me give two Number values which you want to round to nearest breaking ties to even. Try the Number .755, round it to two digits and then try the Number .855 and round it to two digits and see what happens.
EAO: I’m sorry, could you clarify whether this is a weakness in the current implementation of Intl.NumberFormat that you’re pointing out or a something being introduced here?
WH: No, what I’m pointing out is that in order to do rounding correctly, you must convert doubles to strings and round those.
SFC: I’ll just go ahead and do my item, which is that Intl.NumberFormat has done rounding on decimal strings, in decimal string space, for a long time, and that was very much explicitly specified in the Intl.NumberFormat V3 proposal.
WH: Okay.
SFC: It’s not the case—and it’s not the case in Intl.NumberFormat that 0.755 and 0.855 round differently. In fact, they will round according to the rounding mode, and we specified that in the Intl.NumberFormat V3 proposal. And the proposal that EAO has here is not changing that behavior. It’s orthogonal.
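A hedged illustration of SFC’s point: since NumberFormat v3, numeric strings are rounded in exact decimal space, so decimal ties behave consistently (the default roundingMode is 'halfExpand', ties away from zero):

```javascript
// Both strings are exact decimal ties at two fraction digits, and both
// round up under the default halfExpand mode. There is no double-
// precision artifact because the string is never converted to a Number.
const nf2 = new Intl.NumberFormat('en-US', { maximumFractionDigits: 2 });

const a = nf2.format('0.755'); // '0.76'
const b = nf2.format('0.855'); // '0.86'
```

WH’s concern applies when starting from a Number: the double nearest to 0.755 is not exactly 0.755, so rounding directly in binary can disagree with decimal expectations. String inputs sidestep that.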
WH: So you do convert those to strings before you’re doing the rounding, is that right?
EAO: Yes.
WH: Okay.
CDA: That’s it for the queue.
EAO: I’m waiting for an uncomfortable silence to stretch a little longer to see if anything else pops up there. CHU?
CHU: So I think this does make sense, and I would support it. But the thing is, I wouldn’t say I would object to it, but I think keeping a possibility to retain the old behavior is worthwhile, because in the end, what you’re proposing is a breaking change. You said it won’t break the web, but it can have effects: because it’s about formatting a string, if you change that, from an engineer’s perspective there might be layout changes, and I can see use cases where the fast fix would be to keep the old behavior. And for that, we would need the option. I mean, Shane was referring to seeing data for that, but I think it would be worthwhile keeping a possibility to retain the old behavior, as meaningless as it might be.
EAO: Just to clarify, by “keeping the option”, you mean merging PR #12 and adding the stripToMinimum option value to trailingZeroDisplay?
CHU: Yeah. So that—there’s the possibility to have the old behavior opt in, as opposed to not have the possibility.
CDA: Shane?
SFC: Yeah, the position that CHU just expressed is, I think, a perfectly valid position to hold. When we discussed this in TG2, the comments that we got were along the lines of: we either believe this is a web-compatible change or we don’t. If we believe this is a web-compatible change, then we don’t need an option to get the old behavior; and if we believe it is not a web-compatible change, why are we proposing the change? We should instead make it an option itself, or something like that, right? And I think EAO has presented a fairly compelling case, which I agree with, that this is a web-compatible change. So if we sign up for adding an extra option here, we have to be clear that we’re not adding it because it’s independently motivated; we’re adding it so that, in the case that layout changes on a website, like Christian said, there is a quick fix available. And if we think that’s a high enough probability, which given the scale of the web maybe it is, that would justify adding an option that otherwise may or may not be motivated by itself.
SFC: I do think the option is somewhat motivated. I don’t know if it’s strongly motivated, but the one thing this option allows you to do that you can’t currently do is pass in a decimal string that has more than 15 significant digits. For example, if you want to format a string with 20 or 30 significant digits in the decimal string, and you want to get the shortest representation without trailing zeros, you would need this option in order to get that. It’s a bit of an edge behavior, but if you’re building, for example, an online calculator and you want to show the shortest version of the value at all times, that would be a use case for this option.
SFC: So I do think the option could be motivated, but I think that the main reason we would be adding it if we did was basically as this insurance policy. But we still need to be clear as a committee that we believe is a web compatible change.
CDA: WH.
WH: I agree with Christian here. It came up just last week in the development I’m doing. I have a use case for doing exactly what the current behavior does and what this option would let me do—there are situations where I want to display numbers without extraneous trailing zeroes, but I don’t want to round them too much.
EAO: Cool. So I think that is leading to the question of do we have agreement in principle that we ought to merge PR #12, which adds this option value? The change to the spec, if you care to look at it, that this adds is really very minimal. Most of the change for this PR is adding the framework for that option being able to take a new value, but it’s only used to set the string digit count to zero, rather than the value that we generate by actually counting the digits from the string. So, yeah. Anyone opposed to merging this PR?
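As a rough model of what the proposed 'stripToMinimum' value would produce: the option name and the option itself are still proposal-stage, so this is plain string manipulation standing in for the behavior, not a real API:

```javascript
// Strip trailing zeros after the decimal point, and the point itself
// when nothing remains after it: '2.00' -> '2', '2.50' -> '2.5'.
// Strings without a decimal point pass through unchanged.
function stripToMinimum(decimal) {
  if (!decimal.includes('.')) return decimal;
  return decimal.replace(/\.?0+$/, '');
}
```

This models only the zero-stripping effect the option would opt back into; the real option would interact with the full digit-counting machinery in the spec.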
CDA: Shane?
SFC: Yeah, I think the feedback that is most important today from the plenary is whether we move forward with the principle that we want to have an option to get the old behavior back. The exact nature of that, it could be an option with this name; I think we briefly discussed the name in TG2, and the name of this option has not really been thoroughly investigated. So I would expect that we approve it in principle, but then when we come back for Stage 3 at a future meeting, we come back with the finalized name that TG2 has agreed on for the option.
CDA: I’m curious, strip over, like, trim or something? Like, what else was—
SFC: So the name—there’s another value in the same enum called ‘stripIfInteger’, and that’s where we got the name ‘strip’ from. ‘Trim’ may have been another name that could have been chosen for that.
EAO: Just checking: if nobody opposes adding this option value, could we first add the option value, and then, if bikeshedding results in a different name, go with that, rather than blocking this on the bikeshedding?
SFC: That’s what I would intend—I had intended to suggest, yes.
EAO: Cool. So what I’m taking out of this is that I’ve not heard any opposition to merging PR #12. And we’ll do so once RGN and/or SFC give it a review or somebody else.
SFC: PR #10 is the one waiting for SFC and RGN review.
EAO: Somebody should look at this one too, you know?
SFC: Yeah, also PR #12.
EAO: Yeah, yeah. So, yeah, this proposal is currently at Stage 2.7. I’m not asking for Stage 3, because obviously we’ve still got a couple of changes coming to it, but I expect a Stage 3 request to be coming quite soon, possibly in January. We have a new TG2 process of getting W3C Internationalization Working Group review for proposals like this; this is the example case that’s gone through that process, and it has the review and approval from there. There’s also the test262 PR, which is going to need some iteration because it doesn’t include tests for the PR #12 changes yet; that will need to be updated. And there’s a polyfill: I forked the FormatJS library and its implementation of NumberFormat in order to assert how the proposed spec changes actually work in practice. Have a look at them if you’re interested, or don’t if you’re not. That’s it for me. Happy to continue on this maybe in a month or two. Well, two months, I think.
CDA: Good. Thank you, EAO.
Speaker's Summary of Key Points
- PR #10 was presented, and got support in principle for merging (pending review).
- Issue #11 was presented and discussed. The proposal's current behaviour was accepted as reasonable.
- PR #12 was presented, and got support in principle for merging (pending review).
Joint Iteration for Stage 3
Presenter: Michael Ficarra (MF)
MF: Okay, so some number of months back, KG and I put together a testing plan for the joint iteration proposal, with the hope that someone would just write the tests for us. And someone did: ABL from Mozilla followed our plan and wrote the tests, including some additional tests that were not in the testing plan. Feedback from test262 maintainers was that they wanted a couple of changes, which KG took up in this follow-up pull request, adding a commit there. And then I reviewed the entire thing; there were a lot of tests to go through. I made various changes and approved it, so it looks good to me. KG also wrote a spec-compliant polyfill, which we ran against the tests, and they all pass.
MF: I want to have a quick aside here that the entrance criterion for Stage 3 is just that the feature has sufficient testing and appropriate pre-implementation experience. Given what I just said, I think that it meets that criterion.
MF: So—but in addition, you know, the pull request is not yet merged and we have heard from test262 maintainers that they are okay with merging it in its current state. That was, like, some time last week, I’m not sure when that’s going to happen, but, again, I don’t think that’s necessary for Stage 3. At some point, we should have a longer conversation about that, but not today. So for joint iteration, I would like Stage 3.
NRO: Yeah, I would encourage the other test262 maintainers to please merge the pull request, even if you might want to keep reviewing those tests. Because if the proposal goes to Stage 3 now, browsers are hopefully going to start implementing it, and we’ve seen it happen multiple times that there is a pull request with tests, but it just doesn’t run in the browsers’ CI, and so they don’t catch problems. I’m personally fine with saying there is sufficient testing—it’s been reviewed by people who are competent about this—but I would still like somebody to press the merge button. It doesn’t need to happen before Stage 3, but it needs to happen very soon, before browsers start implementing this.
DLM: I fully support Stage 3. We’ve had it in implementation for a while, and I’m looking forward to being able to ship that.
RGN: I agree with MF and NRO. There’s a little bit more to do, but it’s basically just paperwork and rubber stamping on the test262 side. No substantial changes are expected from this point, and it looks good for Stage 3.
JHD: I’m happy with Stage 3, but for the reasons mentioned, I do think it’s very important that the test PR be merged, like, either before advancement or as close to after as possible, but I also think that there’s no need to block advancement on it as long as we’re planning to merge it shortly.
CDA: All right, that’s it for the queue.
MF: Okay, then I’d like to formally call for advancement to Stage 3 for joint iteration.
CDA: You’ve already got a few voices of support for Stage 3. Are there any more? I saw thumbs up from CM. Do we have any objections or dissenting views? Hearing nothing, seeing nothing, you have Stage 3.
MF: Thank you. We are going very fast through this agenda.
Speaker's Summary of Key Points
- Tests have been written and reviewed.
- A spec-compliant polyfill passes the tests.
Conclusion
- Stage 3.
export defer for Stage 2.7
Presenter: Nicolò Ribaudo (NRO)
NRO: Okay. So, export defer for Stage 2.7. Very quickly, what the proposal is: I think I presented this in the last few plenaries, so I only included this one slide. Basically, you have some library on the right that exports a bunch of stuff, and then you have your application on the left that imports one of those things from that library. In this example, only the code needed to run button is loaded; importing from input and checkbox would otherwise cause loading and executing. And when you pair this with namespace imports from import defer, you get the load-then-execute-on-access behavior. Motivation: my apologies for those who already looked at the slides before an hour ago; I just recently added this slide, and it's essentially a copy of slides I already presented. It's common in the JavaScript ecosystem to provide libraries that export a bunch of stuff, and the reason is that it's just better DX for users of that library to import a bunch of things from one place, rather than one function from one module and another function from somewhere else. The way to support that is to internally have multiple files and just re-export from them. This example is how lodash does it; it's an extreme case, and it has around 100 export statements. What happens more commonly is that you have some export from statements, maybe 10, and maybe you have some main code in your file that's exporting some additional utilities defined in some other file. The example here is a file that's a bunch of export from statements, and again, that's the extreme case.
NRO: And this is considered to be a bad practice. Even though it's common, it causes problems, and so people in the community tell other people: please do not do this. The reason is that you're loading and executing code that you do not actually need. But, again, the ergonomics of having a single import point frequently win over these concerns, and then we need tools that go around and strip code as needed; different tools have different ways of doing that, and they might strip different things.
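The pattern under discussion can be sketched like this (file and function names are hypothetical):

```js
// math/add.js
export function add(a, b) { return a + b; }

// math/divide.js
export function divide(a, b) { return a / b; }

// math/index.js, the re-export ("barrel") file
export { add } from "./add.js";
export { divide } from "./divide.js";

// app.js: importing only `add` still loads and executes
// every module that math/index.js re-exports from
import { add } from "./math/index.js";
```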
NRO: So the goal of this proposal is to allow developers to keep choosing the pattern that they're already choosing, but without that causing a problem for end users, because loading unneeded code means my website is going to take longer to load or to become interactive.
NRO: Also, this is new, based on discussion over the weekend; I apologize for that. Somebody asked: between the import defer and export defer you're proposing, how do I know which one I should use? The proposals have some differences and are similar in some aspects. If you remember, maybe six months to one year ago we had a discussion on whether we wanted to use the same keyword for the two proposals, and we chose two keywords for this exact reason. import defer lets you preload a whole subgraph of modules and then defer its execution. export defer, when used in a similar way, gives you more granular control over that, so the library can say: you don't need to execute this whole thing, just this part. And when you're not importing a whole module, just some specific function from it, the host can statically tell that you're never going to execute some part because you don't have access to it, so it can skip loading that whole subgraph.
NRO: So, which should you use when? Libraries should use export defer when they export something that the consumer might not need, so that whether that thing exported by the library is actually loaded and executed is up to the consumer; they're basically deferring the responsibility of triggering that to the library consumer. I would say the only time libraries should use export from without defer is when either it doesn't make sense to use the library without importing that specific binding, or they have side effects they're relying on. And consumers of libraries should use import defer when they're loading the whole library and see that the execution of this piece of code is slow: it's a heavy library, so even once it's loaded, running its JavaScript takes a long time, and I do not need those things to happen at start-up, so I can use import defer to delay that execution.
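A sketch of that guidance using the proposed syntax (file names hypothetical): export defer marks bindings the consumer might not need, while the consumer-side import defer defers execution of an already-loaded namespace:

```js
// mathlib/index.js: the library defers responsibility to the consumer
export defer { add } from "./add.js";
export defer { divide } from "./divide.js";

// app.js: only ./add.js is loaded and executed, because `divide`
// is never imported and so can be statically skipped
import { add } from "./mathlib/index.js";

// consumer-side deferral of a heavy library's execution instead:
import defer * as heavy from "./heavy-lib/index.js"; // loaded, not executed
// heavy.run() would trigger execution on first property access
```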
NRO: It’s not something that you should just blindly use. You should use it once you have a performance problem due to that library. Since last plenaries, we had a couple discussions. We had—there was an open question about implicit propagation of defer. So in this example here, we have at the bottom the internal module that does export defer and foo and bar, and then line exports foo and bar. And, yes, 2015 way, and so we add the defer can keyword, and entry point import only foo. And the question here is library is importing and exporting bar without using defer, and then foo is not really using it, so can we, like, have the defer marker bubble up and not have library first the execution of the Moose Jaw a particular contains bar. And we tried doing that, and it’s a lot of complexity, and it noons when the user does import foo from library, they cannot just open library to see what they’re actually loading is and it will trail down into all of its importing until they see whether or not there’s a defer keyword or not. So we decided to not do this, and instead if library wants to propagate the defer redness of these things, then it’s explicit every foo bar. As consequence of that, we had another idea. Maybe we should just make it a never to use export from for something that was deferred. Because ideally you would want to put defer there to, like, propagate the optionality of the import. The—so the advantage of this is that you don’t accidentally load something where you—maybe you were not expecting to be loaded. The disadvantage of this is that adopting defer becomes a breaking change for libraries because now, well, your consumer is not using defer, that stops working. I have a slight preference for not doing this, for leaving it to lenders, but I did—I’m very much curious here to see how the committee feels about this. 
And then, well, the question was: do we have consensus on 2.7 if we don't make changes? Otherwise I'll have to make the changes first. I have reviews from two of the three Stage 2.7 reviewers from the past few days. There were a few suggested changes, and there's one thread where we explain step by step how it works; the suggested changes are all editorial. Let's go to the queue. I see Jake.
JAD: Yeah, how does this work with dynamic import? Like, how does it know what to load and execute for the target module?
NRO: Yeah, dynamic import behaves the same as a namespace import: if you do import(), it gives you the whole namespace, which means it actually loads everything, because from there you are able to access everything. There was actually an idea, first discussed (well, I don't know if it was formally discussed) around the time we decided import defer only works with namespaces, to do something like import { foo, bar } as ns, where you get a namespace but specify what you want to have access to. But before suggesting anything like this, I would like to see how people use these proposals in practice. It's possible to retrofit something here that lets you say: I'm getting the whole namespace, but please do not load everything. I think we need to wait and see how things go.
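NRO's answer, sketched (the restricted-namespace form is only an idea that was floated, not part of the proposal):

```js
// Dynamic import is equivalent to a namespace import, so the whole
// module graph is loaded: any export could be accessed at runtime.
const ns = await import("./mathlib/index.js");

// Idea mentioned, not proposed: a namespace restricted to named
// bindings, which could preserve the skip-loading benefit:
// import { add, divide } as ns from "./mathlib/index.js";
```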
JAD: And my next question was: on the web we've got link rel=modulepreload, and what would that do? I imagine the answer is: see what develops for dynamic import and copy the same thing.
NRO: Maybe for link rel=modulepreload you might want to specify a list of identifiers to load, and it would know to preload what a static import of foo and bar would load. I still need to think about it; thanks for bringing this up. I was expecting this to have no consequences for HTML, but this is a very good point.
GB: Yeah, I just wanted to say, overall it has my approval. I read through the proposal; the semantics are very well defined and clear. But as I was going through it again, I did have some late-stage reservations, which don't affect my Stage 2.7 approval, but do affect, I think, a Stage 3 approval, and I want to go through some of those now. There are three or four different things; we'll see how it goes for time, and please feel free to interrupt me. So I guess the first thing is that the module system is already very complex. import defer adds new complexity, and then this adds extra complexity on top. The other thing is that this is actually kind of two separate features: the partial namespaces are a bit of a distinct feature from the explicit named imports, because the way the names are collected on the namespace and partially executed is another kind of mechanic. And from a usability point of view, that's where the confusion for end users comes in, which I think is something you touched on in your slides. We're telling users to use import defer right now, so that whenever you import something, you should use an import defer namespace to avoid unnecessary execution until you need features. With this, if you do that, you lose all the network benefits of this proposal, where it can actually avoid loading things over the network, because if you import defer one of these namespaces, you still have to do the network work up front. So import defer, on an export defer library, is effectively an export defer anti-pattern, and that's a little bit confusing in terms of how we communicate it. That's one discussion I think is important to have.
GB: And the other discussion is about whether we really want to be encouraging the barrel file pattern. NRO, you mentioned that all re-exports should ideally be using this syntax. But re-exports are a really useful thing at development time for constructing modules, and then you build out your chunks and you have re-exports from chunks, where you probably want a direct export to the chunk; in both of those scenarios, you don't necessarily need export defer. So is the only use case barrel files, or are there other use cases here as well with these partial namespaces that apply to things that aren't quite barrel files, but maybe are slightly barrel files? I think I pretty much got through it, but thanks for listening to me on all this.
NRO: So, yeah, about import defer and export defer working together: it is true that if you are getting the loading benefits from export defer and then you move to a namespace import, you're losing those. Unfortunately, the guidance for whether it's worth losing the loading benefit to get the execution benefit is going to vary a lot with the environment you're running in. For example, in the browser, not loading one file is probably going to be much better than loading a useless file and then skipping some code execution. But there are environments where file loading is much cheaper; maybe, for example, files are already preloaded in memory, pre-cached and compiled to bytecode, and so the loading cost is basically nothing.
NRO: This export defer thing was originally part of the import defer proposal, and it only deferred execution. We split it out because we realized we could actually do even better: we could also give loading benefits. Going back to not skipping the loading would mean the features are more consistent with each other, with export defer working exactly like import defer, just deferring the execution of something, but we would be missing out on some potential there. Personally, even though it started like that, I find the skip-loading part much more exciting than the deferred-execution part.
NRO: About the barrel file thing: yeah, the example I had here was just a bunch of re-exports. I'm thinking of cases like, for example, react, where you have the core of react and it has a bunch of hooks, and usually users need only some of those hooks. So it's not just something like lodash that exports a pile of utilities; it has core logic and a bunch of satellites around that. Those kinds of libraries would also benefit from saying: all of these additional features are actually skippable.
NRO: Sorry, you'll have to remind me; I don't remember what the other points were.
CDA: Well, JHD has got a reply in the queue.
JHD: Yeah, I just wanted to talk for a minute about barrel files. This is my opinion, informed by many years of experience; if you want to argue with me, please feel free to do so outside of plenary time. Barrel files are at best an attractive nuisance and most commonly actively harmful. In every codebase I have experience with, if you convert all your barrel files into deep imports, where you only pull in the things you need instead of trying to use tree shaking to do a half-hearted job of cleaning up the stuff you brought in that you don't need, your memory usage, your performance, all of these things will improve dramatically. When I was at Coinbase, the React Native app got 71% smaller when we made that conversion overnight. And we definitely shouldn't be doing anything to encourage barrel files or make them more ergonomic. That said, lots of people do use them, so making them more performant is a great thing, and I don't object to the proposal. I just wanted to share the perspective.
CDA: Can you quickly define "barrel files"?
JHD: Yeah, that's a community-coined term that loosely refers to the unfortunately common pattern of having a bunch of small things in a library, yeah, just like this, and then collecting them all together so you have a single entry point. This predates the exports field on node packages, which allows you to limit your entry points without needing a pattern like this or a bundler like Rollup. So nowadays there's just no need for this. You can just tell people to import from ./add, ./divide, et cetera, and you can make sure they can't get at all of your other internal files.
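A sketch of the alternative JHD describes: the exports field in package.json limits a package's entry points so that consumers use deep imports directly (package and file names hypothetical):

```json
{
  "name": "mathlib",
  "exports": {
    "./add": "./src/add.js",
    "./divide": "./src/divide.js"
  }
}
```

Consumers would then write, for example, `import add from "mathlib/add"`, and internal files not listed in the map cannot be reached at all.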
NRO: Just to respond to this: isn't the reason barrel files are bad that importing one thing loads all of them?
JHD: Yes, and that is why this proposal may help. However, there's a lot of code that would have to be updated in order to take advantage of those improvements, and many barrel files aren't just re-exports; they also have some other stuff in there, so you can't rely on your files being that clean. But in general, from a philosophical point of view, I would say pulling in just the things you need is always a cleaner approach than doing fancy stuff to optimize away the stuff you don't need. And I think that as an ecosystem we should be moving as fast as we can away from the pattern that's on the screen, regardless of what performance improvements we still want to provide to help that pattern.
NRO: Thank you. And this is aiming to do that thing: to only load exactly what you need. It's probably just doing it a different way than the one you would like, or find beautiful.
CDA: Just noting we have about ten minutes left for this topic, and five items on the queue. RBR?
RBR: About the argument that barrel files should be discouraged by keeping them non-ergonomic: I don't believe that's actually a real argument, because people like them and, no matter what, they will be used. We have to face the reality of how they are used. I don't believe making them slow or fast will change that at all.
JHD: Just to clarify, I’m more focused on the messaging than on a deluded belief that we can actually change user behavior by making their favorite thing suck. If that were true, like, a lot of things on the web would be better.
CDA: All right. ACE?
ACE: I think—I was on the notes, so maybe I misheard GB, but I think GB said that the advice is that all imports should be import defer. Even if that's not what you said, I'll still say what I'm going to say. We already have import defer inside Bloomberg, using a horrific inline comment instead of the syntax. Our advice is not to add import defer everywhere. There are easy cases where it's hopefully obvious to the author: if you're immediately going to use the thing right under the import, definitely don't defer it, because you're immediately going to trigger the execution, so you're only adding overhead. Our advice is: if you're trying to really defer something, maybe dynamic import is the best thing to use. If it's something you're waiting on until a user clicks a button, and it's okay that there might be a delay, and going fully lazy works for the case, go fully lazy. If it's something you're going to use immediately, use a full import. And then start considering import defer in between, and definitely measure. We're very fortunate to have very good tooling inside Bloomberg to measure what's happening during application load, time to first paint, highlighting which modules (I think there are similar things in browsers) you spent a lot of time evaluating and then didn't execute half of. So the advice would not be to slap this on everything; like most performance things, measure first, and add it because you're seeing that it would be a win.
GB: Thanks for explaining that so clearly. I just wanted to briefly summarize: there were a few things and I kind of rushed through them, and maybe it helps to go over those points again, because to be clear, while I say this isn't blocking Stage 2.7, this is a very serious consideration for Stage 3, and I'm very seriously considering blocking Stage 3 on it. So, to very clearly explain the individual issues: firstly, it provides ergonomics for barrel files, and if that is the use case we are pushing, we just need to be very, very clear that we are all in agreement that barrel files are a good thing.
GB: The current message in the ecosystem, like the refactor JHD mentioned at Coinbase, is that in general we're moving away from this pattern, and if we're now going to be providing optimizations for the ecosystem that create best practices and attract people back to these patterns, we need to be very, very clear about this as an ecosystem. Sorry, JHD, did you want to speak?
JHD: Yeah. We've established in the past that the only valid Stage 3 blocker is essentially something that came from implementation experience. So for all the concerns you have described, I think if you want to use them to object to advancement, you have to do it now, not after 2.7. Just to be clear.
GB: Okay.
NRO: I think the motivation is to agree now that these goals are something we want. I don't think we should go to 2.7 and then say we don't agree with the goal. There is something some people said here, GB, that maybe you didn't fully absorb: it's not necessarily about whether we think barrel files are good or bad; it's that barrel files are used and there are clearly a lot of people who like them, even though there is some other group of people trying to say don't use them. It's not necessarily about expressing whether everybody should use barrel files; it's recognizing that they are a widely used pattern.
GB: I think it's really important we go into that with our eyes open. And the second point: with that newly endorsed barrel file workflow, using import defer becomes a footgun, because if you import defer something that is using this barrel file technique (you showed the example of lodash, and lots of folks here will remember lodash has hundreds of these), when you import defer the barrel file, there are lots of questions.
NRO: Should we have that discussion in the right context? That sounds like a problem of import defer, not export defer.
GB: It's a problem with barrel files, and if they are something we will endorse as a best practice in the ecosystem, we need to think about that from a use-case and usability perspective.
NRO: Okay.
GB: My apologies for bringing the late feedback on that. Let me know if you want to propose Stage 2.7, or if we can maybe continue this discussion.
NRO: It sounds like… at least JHD said we should not go to 2.7.
JHD: All I said, based on what GB is saying, is that if you want to retain the ability to block, you have to do it before 2.7. I don't think we should block the proposal, but it's important that nobody is encouraged to make more use of barrel files.
NRO: Let's go to the queue, and I want to come back to this, to the consensus question: is this actually something we need committee consensus on?
CDA: We only have a couple of minutes. We don't have time to go through the queue.
NRO: SHS's queue item says "standardize barrel files". And ACE, can you be quick?
ACE: Just one point: barrels are the extreme case of only re-exporting, but people also export things from other modules that have a bunch of other stuff in them. So please, this is not all about barrel files. We can come back and present other cases and patterns where people are exporting things from a module that isn't a barrel file.
NRO: Okay. Can I have like five minutes? No?
CDA: No. We have a packed agenda. We could try to schedule a continuation.
NRO: Let's fit this in five minutes. Before consensus there was this other topic, and I see there are things in the queue for it, but I cannot do it in 30 seconds. I have another module discussion later and I'm willing to take that off the agenda if it helps fit this in. But I think we are done for now.
NRO: Okay. Going to this question, I see GCL in the queue. Would you mind, do you want to—
GCL: Yeah. Hi. I agree with the champions here. I think if you introduce this sort of refactoring hazard, it kind of makes it impossible as a consumer to structure anything, and as a library author, you can never migrate to using these. So I think, yeah, you have to leave this to the linters.
NRO: Okay. Thank you. So, without making the change of requiring defer on re-exports, leaving that to linters: do we have consensus on 2.7 as a committee, knowing this means we are agreeing with the goal of making it better to have files with a bunch of re-exports?
GB: Yeah. I'm sorry, Nicolò, to do this at the last minute; it wasn't my plan, but as I was going through the review, it became a serious concern. So yeah, I do have to raise that concern at this stage.
NRO: We will get back to this at another time, then.
Speaker's Summary of Key Points
- The proposal was presented for Stage 2.7, with no changes since last time.
- We discussed at length whether files with multiple re-exports are good or not, whether this proposal risks encouraging them, and whether it removes the bad parts of the pattern. The committee didn't have a shared value judgement.
- We discussed how "export defer" doesn't play well with other features that require namespace imports ("import defer" and dynamic import), since the tree-shaking logic doesn't apply in those cases.
- The champion presented one remaining open question: whether non-deferred re-exports of deferred re-exports should be allowed. There were some "no" answers to it.
Conclusion
- We didn't reach consensus on advancement to 2.7
Error.prototype.stack accessor for Stage 2.7
Presenter: Jordan Harband (JHD)
- proposal
- No slides presented
JHD: Okay. So I initially put this up for Stage 2.7 because I was overly optimistic about what I could get done in the next couple of weeks. I won't ask for that today. What I want to do, though, is leave the meeting with clear direction, so that as long as I get the spec reviewer sign-offs and the HTML integration PR approved, there are no decisions left to be made.
JHD: As a reminder, this is the current spec text. It's pretty basic. The getter throws if the receiver is not an object, returns undefined if it's not a true Error instance, and otherwise, with some hand-waving, gives me a stack string. And then there's the setter, whose behavior is shared with one or both of captureStackTrace and prepareStackTrace; I forget the details. It throws if the receiver is not an object (we will talk about this in a second), and then it installs an own data property on the receiver, which of course will fail if the receiver is non-extensible or the property is non-configurable, and it returns undefined, as setters do. So that is the spec text. This part I intentionally put in here, because with normal usage a setter gets precisely one argument; steps like this are only possible to hit if you borrow the method and .call it on something outside the usual usage pattern. I wanted to add another defensive check here that says: make sure you use this right, for some degree of "right".
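A runnable sketch of the setter steps JHD describes, simulated as a standalone function and applied to a plain object (this is an approximation for illustration, not the spec algorithm, and it includes the argument-count check under discussion):

```javascript
// Approximation of the proposed Error.prototype.stack setter steps.
function stackSetter(value) {
  // Throw if the receiver is not an object.
  if (this === null || (typeof this !== "object" && typeof this !== "function")) {
    throw new TypeError("receiver must be an object");
  }
  // The defensive check under discussion: throw when called with no
  // arguments (only reachable by borrowing the setter and .call()-ing it).
  if (arguments.length === 0) {
    throw new TypeError("expected exactly one argument");
  }
  // Install an own data property; this throws if the receiver is
  // non-extensible or the property is non-configurable.
  Object.defineProperty(this, "stack", {
    value,
    writable: true,
    enumerable: false,
    configurable: true,
  });
  // Setters return undefined implicitly.
}

const err = {};
stackSetter.call(err, "fake stack");
console.log(err.stack); // "fake stack"
```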
JHD: MF has a PR up that basically takes away that check and uses "if present", which means it implicitly assumes that if you have zero arguments, the first argument is undefined, which is certainly how a plain JavaScript function would behave. You can use arguments.length, but most functions don't; it's not worth derailing in another direction. The existing setters we have, MF enumerated them here; we only have them for web compatibility reasons, and I don't think we should look to them for precedent. The __proto__ setter is one of the legacy web-only things from before. Not a huge deal, but it seems nice to me to have things throw errors more often when people are doing the wrong thing. So please put your thoughts on the queue; let's dip into that before I go to the next item.
MF: You said that the "if present" check is like defaulting to undefined. That isn't what the PR is doing here. It is still checking arguments.length, but no-oping instead of throwing. That's the proposed change.
JHD: Thank you for clarifying that.
MF: It skips that step’s action.
JHD: Right. So I hear you, and thank you for the correction. Without the two steps here, it would just write undefined. I don't see any reason why a no-op is better here, and per the phrasing from before: you should get an exception, because it's exceptional.
MF: In a vacuum, I would agree. As I put it in the conversation there, we only have three built-in setters, and they all consistently behave in this way, and I would just follow that pattern, because it's not an important corner case to catch.
JHD: I agree. I would love to hear thoughts from anyone else
MM: I put myself on the queue. So, setting aside accessors, which you covered: just among standard spec functions, is there any precedent for a function that errors on too few arguments but does not error if the argument is provided and its value is undefined?
JHD: Off the top of my head, the only functions I know of that count their arguments are maybe the Object, Array, and Date constructors, and all three do it for overloads. I think that's all of them. No, there's no precedent for it.
MM: Okay.
JHD: I am not adding a check here that throws if you pass more than one argument, because that is definitely not something functions do.
MM: I agree with the distinction you're drawing there. KG wrote up a whole bunch of design rules for future JavaScript proposals going forward that don't have to follow existing precedent, such that we end up gradually moving toward a better language without breaking compatibility. Is there anything in KG's set of recommendations that covers this argument-counting issue?
JHD: No. I think it's just spiritually similar, in the sense that KG's premise was: even though we did something gross before, let's do something better moving forward whenever we can. And, to a much tinier degree, that's the change I am making here.
MM: Okay. I am going to—I am on the fence on this change. I really like having bugs be errors, but I am on the fence. And I am done.
JHD: I am not sure whether that means you agree or not, but RBR?
RBR: So looking at what we have as a spec, things are already inconsistent in many, many spots, and we will never be able to keep everything consistent. It's my belief, at least, and a sentiment I hear, that because we didn't do the best thing before, we cannot do better today about giving users feedback when they make a mistake. I have seen this; I worked as a consultant in many, many codebases where people were messing up because they weren't given any feedback about using an API in a way that is wrong. This happens not only in the language, but also in libraries and so on. But at least from a language perspective, I would really hope that in the future we can provide users with the best feedback when something is not as they anticipated.
MM: Yeah. Other things being equal, I completely agree with you. It's just least surprise: if there's no precedent, then we'd be introducing a first case. But like I said, I am on the fence.
JHD: Yeah. I would suggest considering that the surprising thing is how rarely JavaScript throws when you do something bonkers. Any time we can change that ratio, I think it's a good thing.
KG: Yes. You asked if there's a convention that covers this. There's not exactly a convention that covers this, but we do have a convention that says: when required arguments are missing, we should throw.
MM: Oh. That seems like it covers this.
JHD: Yes.
KG: So the intention was that it should cover functions, and this is not really a function. I mean, it is formally speaking a function, because it's possible to rip it off of the accessor with Object.getOwnPropertyDescriptor, but certainly when I was writing the convention I didn't consider that case.
MM: Okay. In that case, I am completely in favour. I think if it's a recognized design rule that in general we want to move toward throwing errors on too few arguments, there's no reason not to do it for this setter as just the first example.
JHD: Notably, this check only kicks in with an unusual usage pattern, because setters invoked via assignment syntax are always called with exactly one argument, even if it's undefined.
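This can be demonstrated with a runnable snippet: assignment syntax always passes exactly one argument to a setter, so arguments.length is 0 only when the setter function is ripped off the property descriptor and called directly:

```javascript
const seen = [];
const obj = {
  set x(value) {
    // Records how many arguments the setter actually received.
    seen.push(arguments.length);
  },
};

obj.x = 42;        // one argument: 42
obj.x = undefined; // still one argument: undefined

// The only way to reach the zero-argument case:
const setter = Object.getOwnPropertyDescriptor(obj, "x").set;
setter.call(obj);  // zero arguments
console.log(seen); // [ 1, 1, 0 ]
```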
KG: Yeah. And since I am now engaging in this conversation, I do actually want to express my agreement with MF. I do think that as a language, it is great to move in the direction of providing information to users, which is why we have the convention. But this is adding complexity to engines that I don’t think has any benefit, because no one is actually going to do this. Like, no one is ever, ever going to—
JHD: I—
KG: No one not named JHD is going to do this.
JHD: Fair.
KG: I don’t think we should be adding this additional complexity to provide more useful feedback in a case that is that obscure. Better errors in general: good. Additional complexity for cases that aren’t going to come up in practice: not good.
JHD: Then I guess my response to that is: why then do we check everywhere that the receiver is an object and throw, instead of no-opping? You would have to branch either way.
KG: My preference is to treat this as if it was called with undefined, actually. Not to no-op. But…
DLM: I am just following up on KG’s point. I collected telemetry in the summer, and we saw 15,000 calls to the setter across 55 billion documents. No one is using this anyway. That being said, I have a preference for what Michael is suggesting. We are treating this as basically something new, whereas error stack has existed for a long time. I thought about possibly tightening things up to be modern, but for things that have existed for a long time, it’s safer not to do that.
JHD: And just to add one of the possibilities based on that telemetry: maybe we don’t need a setter at all. But the non-zero risk of breakage didn’t seem worth it, so I kept it in the proposal.
JRL: Okay. There are two points I need to make quickly, following up on Daniel’s point. This is really popular in Node, particularly because it allowed you to implement captureStackTrace on older Node versions; that’s what it was commonly used for. It is not used on a bunch of web pages unless they are test pages, so I don’t expect it to be common in usage. But I don’t think this error message is useful. It’s not a common case, and it’s not something we’re preventing a user from doing accidentally: they have ripped off an accessor and invoked it. We don’t need to add any error checking or presence checking here. If you do this one case, we toString it and that’s it.
JHD: We wouldn’t toString it; KG’s no-coercion convention says we wouldn’t do that. The next question is: should we make the setter throw for non-strings? To finish that position, you would say no, stamp whatever value into the slot.
JRL: This isn’t a common case
JHD: It’s certainly not common
JRL: Yeah
JHD: Since I wrote the spec text for it, I am going to leave it cleaner than I found it. As long as we agree on a path, that’s what I will do. I am fine with that.
JRL: My preference is not to do anything here.
JHD: So the preference Justin describes, and maybe someone else shares, is to just delete the check—the two steps here? Which means that not providing an argument just sets undefined as the value.
JRL: Exactly.
JHD: And the no-op option is the PR Michael has up, as opposed to the throw.
JRL: Thank you.
RGN: I think JRL and KG have made every point that I was going to, so I will keep this brief. I am in favour of a simple algorithm. Omit the steps entirely. Let an absent argument be the same as a present undefined. There is no need to go out of your way to provide feedback for users who have gone out of their way to use an API in an unusual manner.
JHD: Okay. So before I summarize and suggest a conclusion, I am going to jump over to this other issue: should the setter throw for non-strings? If you have expressed an opinion that you want me to make a change to this spec text, I am going to assume that it should not throw for non-strings. Is there anyone who didn’t express that position who has a strong opinion one way or the other? Or anyone who expressed that position and differs from my assumption?
RGN: I am not sure what you are asking, but I will provide an answer that I think is responsive: I am in favour of throwing on any non-string input if that is compatible.
JHD: Given the low usage from Dan’s telemetry, it’s compatible to do almost whatever we want here.
MF: So this is a place where I don’t think we have to follow any kind of precedent, and our normative conventions suggest we should throw on non-strings here. We are expecting it to be a string. It’s not useful to put something there that’s not a string, and it’s especially confusing to people who read it back expecting a string. We should throw for non-strings.
JHD: At the same time, we are then also throwing for zero arguments, because undefined is not a string. Just making that clear.
RGN: I am explicitly in favour of that.
JHD: Since the queue is empty, then, I am going to move forward—unless someone jumps on and objects—with the assumption that I will replace these two highlighted steps with, effectively, “if V is not a String, throw a TypeError exception”. That leaves the reviews and the HTML PR, which I think is a pretty straightforward plan; Kevin helpfully enumerated the things I need to fix, and I don’t expect any controversy over those changes as long as they are complete. Assuming I get that work completed, I will come back seeking 2.7 with those changes, and fingers crossed, I anticipate no friction. So thank you.
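The direction agreed here can be illustrated with a plain object (a sketch only; the real change is spec text on the proposed `Error.prototype.stack` accessor, and `errorLike`/`_stack` are hypothetical names):

```javascript
const errorLike = {
  _stack: '',
  get stack() { return this._stack; },
  set stack(v) {
    // Throwing for non-strings also covers the zero-argument case,
    // since an absent argument arrives as undefined.
    if (typeof v !== 'string') throw new TypeError('stack must be a string');
    this._stack = v;
  },
};
```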
Speaker's Summary of Key Points
- Not seeking advancement yet
- Plan to return in a future meeting once spec reviews and HTML PR are ready
- Achieved clarity/direction from plenary about setter behavior
Conclusion
- setter throws for non-strings
Declarations in Conditionals for Stage 2
Presenter: Devin Rousso (DRO)
DRO: Hello. I am Devin. So yeah, this is Declarations in Conditionals, for Stage 2. To give a little background, this was first presented in 2019, and then I got busy; I am now trying for Stage 2. A brief overview: imagine we have a class with a getter that does some work—something expensive. Sometimes it does, sometimes it doesn’t. This is contrived, but most people who work in JavaScript can imagine a scenario where there’s some function that returns large objects that are temporaries.
DRO: This is a pattern I have seen a lot. It’s usually not easy to identify that this is bad just by looking at the code, because you don’t necessarily know whether the bar getter is expensive. Besides it being expensive to call, the work is being repeated multiple times in this code. And you don’t necessarily know that the first call could return an array while the second could return null.
DRO: Just because of the nature of the logic, because you are dealing with temporaries and transient values, you don’t know what you are dealing with. You could save to a variable, but when you have multiple things, you end up with multiple numbered variables—bar1, bar2, bar3, bar4—since you can’t reuse the same identifier. And because these things are declared outside the ifs, they exist for longer than you might ideally want. For example, inside the if for bar2, bar1 is kept alive for that whole period, because you don’t necessarily know when it will go away.
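For reference, the closest you can get today is an explicit block around each temporary, which limits its lifetime at the cost of noise (a sketch with a hypothetical `bar` getter):

```javascript
function summarize(obj) {
  let total = 0;
  {
    const bar1 = obj.bar; // read once; scoped to this block only
    if (Array.isArray(bar1) && bar1.length) total += bar1.length;
  } // bar1 is out of scope (and collectable) from here on
  {
    const bar2 = obj.bar; // the identifier still can't be reused without a block
    if (bar2 === null) total -= 1;
  }
  return total;
}
```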
DRO: So what if instead we could move these variables inside the ifs, or make them part of the ifs? If you have dealt with other languages like C++, Rust, and Swift, they have this capability. The idea is to scope the variable to the if statement while using it in the if statement itself, or in the condition of the if.
DRO: The main benefit is that we can reuse the same identifier as many times as we want, regardless of how many times we are doing things—like walking trees where nodes have data, left, and right: you can reuse data each time, and it becomes a more fluid, easy-to-use process. This proposal extends that to let you use customizable conditions. So if you wanted to, instead of the normal truthiness check, you could say: give me an array with a length property. You can also do this with multiple identifiers; in that case, it requires you to provide a condition expression—it wouldn’t try to do some magical thing where it ands them all together. You have to provide something explicit as the condition. It also supports destructuring, with the rules I mentioned for multiple variables.
DRO: As for the actual syntax of this: part of the reason it took so long is that I thought it would be a nightmare to pull the grammar apart, but it wasn’t that bad. We change from a generic expression to allowing let or const or using for the single-identifier case, and this is automatically truthy-checked like any variable or expression inside the if. Then, if you want to do something more complicated like destructuring, you have to provide this condition expression, which can do anything you want—it doesn’t even have to use the variables if you don’t want it to. You have to give an explicit condition, so it’s not assuming something magical. The same goes for while statements as well.
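Putting the pieces together, the shapes described look roughly like this (proposed syntax as presented; none of this is valid JavaScript today, and `use`, `getPair`, and `readLine` are hypothetical):

```js
// Single binding: truthy-checked automatically, scoped to the `if`.
if (const bar = this.bar) {
  use(bar);
}

// Destructuring (or multiple bindings) requires an explicit condition
// after the semicolon; nothing is and-ed together implicitly.
if (const { left, right } = getPair(); left && right) {
  use(left, right);
}

// The single-binding, truthy-checked form in a `while` header.
while (let line = readLine()) {
  use(line);
}
```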
DRO: Same exact structure: the single-identifier case is automatically truthy-checked, and multiple identifiers—destructuring or multiple variables—require the explicit condition. That’s my proposal in its current shape. There is still one big looming question as part of this: whether the new identifiers are visible inside the else as well as just inside the if.
DRO: I think everybody that I talked to says they should be available in the if. But whether this is extended to the else is the big question. I will do my best to briefly summarize the arguments for and against, and present my thoughts, to keep us moving quickly.
DRO: So if it is exposed in the else, it allows you to introspect what went wrong. With the custom condition expression, it lets you understand whether one of the two is falsy, or whether it’s NaN versus the empty string, or some other definition of falsy. It gives you more understanding of why things didn’t happen, and lets you do more expressive things with that.
DRO: On the other hand, if it’s scoped only to the if, you can redefine and reuse the identifiers. In my opinion, that makes things like using a lot nicer, because it lets you much more narrowly define when things are alive. Along these lines, the main argument against it being exposed in the else is that the transpiled version is effectively just wrapping the if inside a new scope and creating the variable locally there—which isn’t great, for a couple of reasons. It becomes mere syntactic sugar, and you create a new scope, which is not great for engine reasons. It doesn’t provide a new capability in the language, whereas scoping only to the if is something you can’t easily achieve otherwise.
DRO: The transpiled version of the if-only scoping is awful, mainly because of completion values; it’s a more difficult thing to achieve. I have gone through this quickly, but my personal view on whether it should be available in the else is that it shouldn’t be: don’t put the variables in the else, only have them inside the if. That being said, all I really personally care about is that it’s available in the if. If people prefer it in the else, I am fine with that; I don’t think I will use that feature personally. I was using the C++ capability as an example, and I didn’t even know that it was exposed in the else in C++. There are people with strong opinions, and I hope we come to an agreement and move forward. But so long as we have it in the if, that’s what I am happy about. So, Stage 2?
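The else-visible flavor DRO mentions can be emulated today by wrapping the whole if/else in a block (a sketch; `describe` and `getValue` are hypothetical names):

```javascript
function describe(getValue) {
  let out;
  {
    const data = getValue();
    if (data) {
      out = `got ${data}`;
    } else {
      out = `missing (${String(data)})`; // `data` is still visible here
    }
  } // `data` does not leak past the block
  return out;
}
```

If-only scoping has no such direct equivalent, which is DRO's argument that it adds a genuinely new capability.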
DRR: Hi. I am wondering, in the example where you had the if/else, where you had another variable scoped to the if—
DRR: Further back. Where I have let data = blah. Would those data declarations really conflict, or would the subsequent data, the second one, exist in its own separate scope?
DRO: So my thinking with this is that if it is in fact limited to the if, then this would be allowed. Because I think this is a valuable pattern. If it is exposed to the else, I am not sure how you could do this without running into conflicts given the fact that the second if is sort of like inside of the else. I could be wrong about that. But that’s my understanding.
DRR: I think it would be kind of strange, because what you are saying kind of implies that if you had a let data outside of the original if, it would conflict with the immediate if-let data. So under the spec I would imagine, where you do allow the else to view the variable, there should be no conflict between those variables. That’s my intuition; I don’t know if there’s any issue. In other words, the else-if would shadow the original one. Looking at the queue, Keith Miller has something similar there.
DRO: Again that is very possible. I am not completely familiar with the exact nature of this. Yeah, I suppose it could just shadow and be fine and work that way. So maybe this wasn’t exactly the best example for one of the reasons it would be better if it’s only in the if.
DRR: That’s fine. I would like to voice more support for the else in a bit.
KM: My mental model: the body is just a single statement, and a block is a statement that has many statements inside it. So when you say else if, you have a new statement that internally creates a new scope containing data. That data would shadow the outer one because it’s a new scope, the same way it does for any other block.
DRO: Yeah. I mean, that—that sounds reasonable.
CDA: WH is on the queue asking about the spec.
WH: That’s what I am asking about. There’s no spec. None of the new statements are defined in the spec.
DRO: That could be possible. I have never written a proposal or a spec thing before. So it might be that I am missing things. If that’s the case, my apologies. If there is more that needs to be added, I certainly can do that and try this again. But my understanding was that the syntax was sort of enough for the initial idea of what needed to be done as Stage 2 entrance criteria. I didn’t know I needed the entirety of the semantics.
WH: For Stage 2, you need the semantics defined, and right now there are no semantics.
DRO: Do they have to be done in the document or the proposal?
WH: In the spec language.
DRO: Well, that’s my misunderstanding, then.
CDA: There’s a point of order. PFC says a complete spec is not necessary for Stage 2. I think that’s definitely—
JHD: The process document requires spec text—I can’t recite it from memory like I used to. Major—
CDA: One acceptance criterion for Stage 2 is: initial spec text including all major semantics, syntax, and APIs; placeholders and editorial issues are acceptable.
KG: Yeah. I was going to agree; there’s a lot more that needs to be done with the spec. Sorry, I would have called this out if I had realized earlier that this was the state it was coming to Stage 2 in. There’s a lot that needs to be done, especially for using—for example, making it dispose before the else. There will be some work.
MM: Okay. So this solves a real problem, and I like the nature of the problem that it’s solving. My only misgiving is that, given what the semicolon means in the header of a regular for, seeing the semicolon with this meaning in this position is just confusing. But I don’t have a better suggestion, so it’s just a misgiving. I agree that there’s a genuine problem that it’s solving well.
OFR: Why is it confusing? The second expression is the condition in the for. It would be the same here.
MM: I’m sorry. Olivier is asking me why it is confusing.
OFR: You could write this as a for as well, sort of. In the header of the for, the second expression is the condition, and it would be the same thing here. Or am I misunderstanding something?
MM: Oh.
MAH: Nothing prevents you from having a second or third semicolon.
MM: My issue goes the other way now: having least surprise because it’s analogous is great.
KM: The semicolon between the declaration and the condition is pretty standard in the C-style languages that have if declarations. So we would certainly be moving away from the design of many other languages if we chose not to support that at all, or picked some other syntax. We would be carving our own path, which in some ways creates more complications than it solves. And, you know, it is analogous.
MM: The more analogous to things that exist in the world, the happier I am—and now I support the syntax. What are some examples other than for in other languages?
KM: C++ has this exact syntax. You say if (auto thing = …; condition), where the condition can be anything that implicitly converts to a Boolean. I believe that’s true in Rust as well, but I am not 100% sure.
MM: I am completely on board with the semicolon.
RBN: So I have brought this up on the issue tracker as well. I am in principle not opposed to the proposal, but since we are talking about entrance criteria for Stage 2, I have some concerns that this overlaps with the pattern matching proposal’s infix operator—whether that ends up being called is or something else. The pattern matching proposal as it currently stands introduces the ability to match patterns with variable bindings, and it would wholly subsume this proposal, with the exception of using. An if condition or a while loop—all of those things are completely possible with pattern matching’s is expression and let variable patterns. I think it would be important to have a discussion between the champion of this proposal and the champions of the pattern matching proposal about how all these things fit together before we say that this is the accepted solution and the committee advances it. Until then, Stage 2 is premature.
SFC: Briefly, I’ll mention that I’ve been writing a lot of Rust code, and I find myself wanting to use Rust’s if-let bindings, which work like these, separately from when I use match statements. There’s a world where both of these proposals exist; I don’t think one subsumes the other.
KM: Yeah, I agree with SFC. You can use both in certain circumstances; even if the syntaxes seem to overlap, they serve different use cases depending on what you are doing.
RBN: I am not saying this proposal should never advance alongside pattern matching. But we should have the discussion of how the cross-cutting concerns should be addressed before this advances to Stage 2.
MM: Yeah. Just what about switches? The same logic would seem to apply to the switch head being visible across the body of all cases.
DRO: Yeah, that was something that was originally asked. I’m not opposed to that being added as a future thing. My thinking with switch was that you already enumerate the cases, so you already have the value you’d get from the variable. But certainly, if there is a desire to have all of this done at once, we can add switch to this proposal.
MM: Okay. My preference is for it all to be done at once.
CDA: + 1 to switch from SHS and a reply from JAD.
JAD: Isn’t it quite different with switch? You wouldn’t have the condition, because the conditions are inside the switch statement.
MM: Yeah. So that’s actually a good point. There’s no condition. What you are doing is producing a value and allowing a declaration to bind the value you are producing, so that you can refer to it across all of the cases with the variable that you bound. And I have written a lot of two-line switch headers: define a variable holding the expression of the thing I want to switch on, switch on the variable, and then refer to the variable in the body. I do that all the time. It makes just as much sense for switch, and it’s understood in the same way, which I think it would be: switching on a value. You can think of an if as a special case, where you are switching on the truthiness of the value. I would just like to see switch included. It’s not a strong thing if switch is not included in the proposal; I would certainly not block on that.
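The existing two-line pattern MM describes looks like this (illustrative; `classify` and `getValue` are hypothetical):

```javascript
function classify(getValue) {
  const value = getValue(); // bind the discriminant above the switch…
  switch (value) {
    case 0:
      return 'zero';
    default:
      return `other: ${value}`; // …so every case can refer back to it
  }
}
```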
JHD: I think switch is gross and bad and no one should use it, and I don’t want to see it get better. That’s why I am involved with pattern matching: I want switch to be replaced wholesale. The other thing that is sort of related: all the examples used curly braces. Any good codebase should include them, but people often omit them, and I would have to think about what the scoping should be there. Maybe it’s just within the one line; maybe it doesn’t work unless you use the curly braces. But I think switch has similar problems.
MM: Are you saying the body of a switch does not syntactically require curly braces around it?
JHD: I think the body of a switch specifically might, but case bodies do not.
MM: The case nodes are nested
JHD: There’s been a widely-used lint rule for it for 10 or 12 years.
MM: The problem with variables defined in one case being used by other cases is that the variable being in scope versus the variable being live has no sane correspondence. This doesn’t have that problem.
JHD: In a lot of the discussion on Matrix around whether it’s visible in the else, the opinion was that if you use it in both branches, you should wrap the whole thing in a block and declare it in a statement above the if. That reasoning applies to switch: rather than having it available in cases, define it above the switch, and then you don’t need additional syntax. It adds value when used in only a subset of the branches.
MM: No, that’s not true at all. The syntax adds value simply by letting the expression define the variable so you can then use it. It’s a minor syntactic convenience; you could do without it. But I would find its absence surprising given its presence for if and while.
JAD: I just want to check, Mark. Are you saying that when this is used in a switch, it wouldn’t be processed as a conditional—even if the value is falsy, the body of the switch would still run?
MM: Yes. Like I said, you can think of the switch as a generalization of a conditional, where the conditional switches on the truthiness of the value. In any case, given all of the counterarguments, I still have my preference, but it’s getting milder and milder, and it’s certainly not blocking. So Devin, I would say, don’t worry about my switch issue if there’s enough disagreement.
CDA: + 1 to switch from DJM
NRO: When reviewing this proposal internally, some people had the expectation that if let foo = something checks whether the something exists—that is, checks for nullish—rather than truthiness, which is how conditions work with current expressions in JavaScript. I don’t have a suggestion; just please keep in mind that different people have one specific expectation or the other.
CDA: WH is also + 1 for switch.
OFR: Okay. Yeah, I think this got moved around a bit and it went a bit fast for me, so I need to ask some questions. You would want the scope to exclude the else, right?
DRO: That was just my preference.
OFR: Okay
DRO: I don’t have a strong feeling one way or the other. When I originally proposed this in 2019, there was discussion; I wanted the variable scoped to the if statement. Whether or not it’s usable in the else, I don’t have a strong preference.
OFR: Okay. You were saying that the complicated transpilation is an argument for excluding the else?
DRO: Yes. I know that sounds crazy. My thinking was that if you do allow it in the else, then the transpiler can just wrap things in a block. We should focus on things that add real value to the language—things that are difficult to achieve otherwise. That was my viewpoint.
OFR: Okay, fair. All right, then maybe my comment: especially with these long if/else chains, there is not even a way to write the scope syntactically—as you say, transpilation needs a function. You cannot say, okay, it starts here and it ends here with curly braces, as far as I understand. That I would find a bit concerning. Historically, it’s been difficult for implementations and transpilers to always get the scopes correct, and that might be an argument for doing the simpler thing. And especially if you think about future interactions with using and all of these kinds of things, it could get quite tricky to get it correct and understandable. So I have a slight preference for making it visible in the else as well.
CDA: All right. We have a couple more minutes. There’s a reply from NRO
NRO: Yeah. I don’t think the transpilation makes this difficult: regardless of which direction we go, we can give a unique name to the variable. There are some edge cases, but that’s not the common case. The common case is let or const, and it can be transpiled either way.
CDA: There are several items in the queue. DRO, do you want to take a look at the queue and cherry pick?
DRO: Is there anyone that is particularly in favour of the idea of not exposing it in the else?
GCL: I am very strongly against it being available in the else. I can expand on this, if that would be desired
DRO: I would love to get a sense as to why.
GCL: In my mind, the purpose of this proposal is to introduce a binding predicated on some condition, and if that condition does not pass, I do not want that binding to be available to other code in my program, because it has been decided that the binding is not valid for some reason. If the binding is available elsewhere, I think it’s an antipattern to use it: you mix cases where the data has been validated in some way with cases where it hasn’t. This is something we have seen—as Kevin pointed out recently, on Matrix, sorry—where Rust changed their behavior: they made a breaking change to enforce what I described with respect to drop order, because bindings that outlive, or are accessible in, the alternative branch of these conditions were causing bugs in production code. So yeah, I am very against these bindings being available.
CDA: Also + 1 against from RKG and KG. Clarifying question from Ashley?
ACE: I guess this is addressed to GCL, and in general to the crowd in favour of not exposing it: is it not there at all—meaning, if there were a data higher up in an outer scope, you could reference the outer one—or is data in a TDZ in the else, so it’s there but not defined, and if you tried to access it, it would throw rather than go to the outer scope?
GCL: I think the right side of the screen is like what I would expect.
DRO: My understanding would be that you would be able to access the outer data basically. So it wouldn’t be a TDZ or anything like that.
CDA: All right, we are on time. There’s still a lot in the queue; I will capture the queue, and if we have time for a continuation later, that will be swell. But it doesn’t seem like there’s a path to Stage 2 today. So thanks, everyone. We are now going to have a short break. Please be back in the room at the 20-minute mark, that is just under 19 minutes. Thank you.
Speaker's Summary of Key Points
- The proposal has changed to now also allow multiple variables (e.g. comma-separated lists, destructuring, etc.), the new using keyword, and a second, optional condition expression (which is required if multiple variables are created), and to scope the variables only to the if block.
Conclusion
- More work is needed on the spec before advancing to Stage 2.
await dictionary for Stage 2 or 2.7
Presenter: Ashley Claymore (ACE)
- proposal
- no slides
ACE: I hadn’t realized that we’d got through the agenda so quick.
[Awkward pause as we wait for ACE to log into Zoom to show the proposal. The dead air begs to be filled with something. Anything]
RPR: When I lost my dictionary, my wife asked if I’d looked upstairs, and I told her, I can’t look up anything.
ACE: I’ll go before Rob tells a second joke. So we talked about this, I think, in the last meeting. And only a little has changed since. But recap for people.
ACE: So if someone is doing this currently in their code, they’re creating a waterfall: we’re not going to start getting our lovely color, or getting this mass value, until we finish getting our shape. So people might refactor to use Promise.all to get everything in flight. The thing that’s annoying here is that—whoopsie daisy—I’m binding color to the shape; I’ve got my things mixed up. In previous times I’ve shown this example, it’s maybe not too bad with only three things, but it gets worse and worse the more complex this gets. It gets particularly complex when you end up with conditionals in this list, so it gets even harder to start counting “oh, this is the fourth thing” and then count back. Or you could do this today: get things going, then give these variables names—people call them something like shape with a capital P—and that is fine. But there are two things that are kind of bad here. One thing that annoys me is that you end up with twice as many variables in scope. When you’re typing later on, your autocomplete is going to show twice as many things, even though after you’ve awaited shapeRequest you effectively want to delete it from the scope—really, why would you ever go back to the original promise once you’ve already resolved it?
ACE: The other thing, which is perhaps more important: when I await shapeRequest, I’ve not actually associated any handlers with the other promises. They’re just floating. It’s not until I’ve actually awaited the first one that I attach handlers to the next, meaning that if multiple things reject, they might reject with no handler attached, and then you get an unhandled promise rejection.
ACE: So this proposal adds a little API. It’s like Promise.all, but it’s Promise.allKeyed: you name all the promises, and you get back a promise of an object using those same names. Now it’s much harder to muck up the order. The naming follows the naming we’re using for Iterator.zip, because it’s the exact same pattern: zip takes iterables and gives you back arrays, just as Promise.all gives you back an array; zipKeyed takes named, keyed iterables and gives you an object with that shape, and Promise.allKeyed follows the exact same pattern.
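A minimal sketch of the described behavior (assumed semantics only; the proposal's spec text is authoritative and handles details like thenables and iteration order, and `promiseAllKeyed` is a hypothetical stand-in name):

```javascript
async function promiseAllKeyed(obj) {
  const keys = Object.keys(obj);
  // Subscribe to every promise up front, so none is left unhandled…
  const values = await Promise.all(keys.map((key) => obj[key]));
  // …then reassemble the results under their original names.
  return Object.fromEntries(keys.map((key, i) => [key, values[i]]));
}
```

Usage would look like `const { shape, color } = await promiseAllKeyed({ shape: getShape(), color: getColor() })`, with `getShape`/`getColor` as hypothetical helpers.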
ACE: The change since last time we discussed this is the question “should we do the same for Promise.allSettled?” Previously, there was strong support—yes, we should also do this for Promise.allSettled. So we do; the spec now includes the text for that.
ACE: So it’s currently at Stage 1, but I think we have everything to go to Stage 2.7, but if people don’t want us to go to 2.7 for some reason, happy to hear that. There’s no need to rush any of this. So, yeah, asking for stage 2.7 and then see how we go from there.
JRL: Okay, so the queue is pretty light. JHD?
JHD: Yeah, I just wanted to say: you said the worst part was the unhandled promise rejections. For me, what’s even worse is that it unnecessarily serializes those three calls. Imagine they were all network requests: instead of firing them all out more or less concurrently and returning when all three have finished, you’re not even firing the second request until the first one has completed. This is a common problem in general with await that Promise.all is a solution to, and allKeyed just makes that solution even more ergonomic than Promise.all. So it’s a great thing for performance as well.
JRL: That’s it for the queue.
ACE: Are there any voices of dissent?
JRL: So you’re asking for 2.7?
ACE: Right.
JRL: So are there any objections to Stage 2.7?
CDA: I think we should ask for support first.
JRL: Oh, sorry. Does anyone support it?
ACE: Yeah, it’s on the queue, MF, DLM, DJM, GM, WH, which is great. Thanks, WH. CDA, JSL, CHU.
JRL: All right, so lots of support. Are there any objections to stage 2.7? Then I think you just saved us 20 minutes. Congratulations.
Speaker's Summary of Key Points
- Re-presented the motivation for the proposal and its relation with Iterator.zipKeyed
- The proposal has been updated to include Promise.allSettledKeyed as discussed previously
- The proposal is currently Stage 1 but is asking for 2.7 as the specification is complete
Conclusion
- No objections
- Support for going directly to Stage 2.7
- Progressed to Stage 2.7
Intl Unit Protocol
Presenter: Shane Carr (SFC)
SFC: Excellent. Okay. I want to do the same exercise with—I think I’ve got this mostly queued up and ready. Okay, are my slides visible? Excellent. And I’ll make them full screen, and I was just handed a sheet. It will either stay here for my presentation or it will float or I’ll—I’ll do my presentation.
SFC: Also, I obviously can’t help with the notes while I’m being presenter, so someone else will have to cover me. Okay, Intl unit protocol for Stage 1. So a little bit of background. As you may have heard—
SFC: Cool. So in the background, a number ought to be annotated with the quantity it is measuring. This is a statement you’ve heard me and others make previously with regards to some other proposals, such as the measure proposal and the amount proposal. One of the main reasons is that it unlocks i18n features, such as automatic unit conversion. So the key thing is that the unit is part of the data. It’s not a formatting option. It’s a fundamental part of the data model. And we want to keep it that way. We want to keep the unit associated with the number that it is measuring. This is just a little code example for a hypothetical MessageFormat, which doesn’t have to be a first-party MessageFormat. It could be third-party. And the message remains the same. This is a little background on the use case. So the proposal, as a step in this direction, is that we introduce what I’m calling a unit protocol. This is a little snippet of what the code could look like. This code I copied from the README file, which was available in the materials at the agenda deadline. These slides were just put together recently, but all of this content was in the README file. So let’s say the inputs you have are locale, unit, and number. Currently what you do is pass the unit into the constructor, as well as the locale into the constructor, and you pass the number into the format function. The proposal allows you to optionally specify the unit in the format function; you don’t have to specify it in the constructor. That’s on the right side of the slide over here. You can see basically we take the unit, which is currently always in the constructor, and we move it down into a new options bag that we have in the format function. I’m using modern ECMAScript syntax here, so this is implying that the keys of the object are named number and unit, if that wasn’t clear.
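For context, the constructor-only placement that exists today looks like this; the format-time option sketched in the slides (shown in comments) is hypothetical and not part of any shipped engine:

```javascript
// Today: the unit is fixed at construction time.
const kmFormatter = new Intl.NumberFormat("en", {
  style: "unit",
  unit: "kilometer",
});
console.log(kmFormatter.format(5)); // e.g. "5 km"

// Proposed (hypothetical, per the slides): pass the unit alongside
// the number at format time, in an options bag:
//
//   const nf = new Intl.NumberFormat("en", { style: "unit" });
//   nf.format({ number: 5, unit: "kilometer" });
```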
SFC: So I just wanted to walk through a couple of the caveats here, caveats that we know about. And there are caveats we don’t know about, because it’s early. One of the main ones is construction versus formatting. Intl has long had this concept that you have options that go into the constructor of your formatter, and then you have things that go into the format function. And there are two main reasons why Intl has this separation. One of them is that it allows the formatter object to encapsulate the locale and other formatting options—so basically the Intl part, the i18n part. This allows, for example, if you’re creating a template engine, you can have your formatters created according to your template, according to your messages, and the data is passed in later, passed into the format function. The second reason is that locale data can be initialized ahead of time, and by putting it in a constructor, you can take a lot of that work and do it ahead of time, so you can reuse the formatter multiple times in a tight loop and it makes code more efficient. Those are the two reasons.
SFC: One caveat with this proposal is that it advances reason one, because we have better separation of the data model concerns from the formatting concerns, but it also has an impact on reason two, because we can’t preload the display names for the unit we’re formatting—we have to wait until the unit is available. I’ll point out that there are other cases, for example when we’re formatting date times, where we don’t know what the month is going to be until you give us your date, right? So we currently preload, like, the different possible month names for the calendar. And I think there are various steps we can take in this direction to minimize the impact on performance—I can go into more detail on that—but basically by keeping all the display names readily available so they can be accessed without having to do an expensive data load operation. But it’s not going to be zero impact. So if you had asked me, what’s the downside of this proposal, what’s the cost? It’s this.
SFC: Cool, next slide, about currency. So one of the styles in Intl.NumberFormat is currency. We’ve had this since version 1 of Intl.NumberFormat. And currency is unit-like, so my proposal is to include currency as part of this unit protocol. An open question I have, which we don’t need to discuss in this group, but we can if we want to—I imagine we can discuss it also in TG2 and other venues—is whether the name of the property in the options bag that gets passed into the format function is named unit or currency if you’re trying to format with a currency. So that’s an open question. Something to discuss, not necessarily today, but we could. Conflicting units is another caveat. I hope this is not controversial. But if you specify a unit in both the constructor and in the format function, and they’re different, then you get an exception. I hope that’s not controversial.
SFC: Precision: so thanks to Intl trailing zero handling, we don’t actually need to worry about this. Precision is retainable if you’re using a string as the number type. This accepts all the number types that Intl currently accepts in the format function, which includes the Number type, the BigInt type, as well as string decimals, and this is not changed. It’s just that now you can associate that also with a unit. I also want to point out options for extending the protocol. One question that I got was: can you also associate precision with other types, like Numbers? Maybe you want to say what the number of significant digits of a Number type is? And that’s something we could add in the future by adding another field to the object. Other data-oriented formatting styles: people often muse about, well, how do you do rational numbers—one-third of a cup, for example? That’s something that could be supported here via an additional extension to the protocol. Also, different precision styles. A lot of these things you’ve also heard about in the context of amount. That’s actually on the next slide—I won’t get ahead of myself—but another type of thing you can add to the protocol is data precision styles. People ask me, how about margin of error? Significant digits is not always the right way to do precision. So it’s extensible in this way.
SFC: Impact on amount—let’s talk about this. There’s the amount proposal, and this protocol does not replace the amount proposal. It’s sort of split out from the amount proposal. So it inherits a lot of the design of the amount proposal, and actually a lot of the design space of the amount proposal should be discussed in the context of this one. Some of these questions about what annotations to support and so forth are actually part of this proposal. We’ve already established, consulting with TG3 and others, that the amount proposal sort of requires having a unit protocol. So this proposal is the unit protocol piece of the amount proposal, if you will.
SFC: But one of the key questions, and I think something that we should discuss today with the time that we have, is whether, when we design the protocol, we should design it for amount or whether we should design it for cases where you’re not necessarily using amount. And I think that there are tradeoffs here. Because in the protocol design I showed, you have an object with two fields that have specific names, and that makes it very ergonomic to use if you’re just using the protocol. But there are other designs. One of them is that you could have functions. Functions have some advantage if you’re using this to get an object, because maybe you don’t have to precompute things you may have otherwise wanted to precompute. So functions make it so that you don’t have to do expensive operations eagerly—functions are a better design for that. We’ve previously discussed this with Intl.Locale, which we discussed earlier, and one of the biggest changes we made to that proposal is that we started with getters and switched them to functions, and the functions make it more explicit that the operation is not free. So functions could be one design. If we do it that way, it of course regresses the usability of the protocol if you’re just using it via an object.
SFC: Another option is you have Symbol functions, like Symbol.iterator or something like this, right? We have precedent for this elsewhere in the language. This really encapsulates the protocol very tightly. It still allows a third-party amount-like object to be implemented in userland, but it makes it so that it’s not easy to accidentally use, and so that the protocol is very much self-encapsulated and doesn’t feed into the design of the object that is implementing it. Another one is we could do, for example, an annotated string, like what’s shown here. And I have increasing numbers of question marks on each row, because I’m not proposing anything specific. I’m showing a general shape, right? The further down the list I get, the more abstract the shape gets, and I just want to make the point that a protocol could have multiple different shapes. So that sort of brings us to the key question, because the protocol I had on the slides, and the protocol in the README file, is very much oriented towards using the protocol directly. However, there are some tradeoffs, and one of these other directions could have advantages in a future world with amount.
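The design options SFC lists could look roughly like this in userland; all the names here (`toUnitValue` especially) are hypothetical illustrations, not proposed spec names:

```javascript
// Option 1: plain data properties — just an options bag.
const asFields = { number: "1.5", unit: "kilometer" };

// Option 2: functions instead of data properties — makes it explicit
// that computing each value may not be free (cf. the Intl.Locale
// precedent of switching getters to functions).
const asFunctions = {
  number: () => "1.5",
  unit: () => "kilometer",
};

// Option 3: a Symbol-keyed method, like Symbol.iterator — tightly
// encapsulated, and hard to implement by accident.
const toUnitValue = Symbol("toUnitValue"); // hypothetical symbol
const asSymbolProtocol = {
  [toUnitValue]() {
    return { number: "1.5", unit: "kilometer" };
  },
};
```

Each shape trades ergonomics for direct users against flexibility for a future first-class Amount that implements the protocol.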
SFC: So I wanted to have a discussion, and then ask for Stage 1 to explore this problem space.
JHD: So my first thing on the queue was on that page where you said hopefully this is not controversial. I’m fine—I don’t care if it throws an exception or not if you have conflicting units—but my question is: what if I have a formatter somebody gave me where the units are already defined, how do I override that? We designed regexes explicitly for that use case.
SFC: That’s a good point. Maybe when I said uncontroversial—that’s not something that I thought of when I wrote that slide. So, yeah, that’s a good point.
JHD: It’s certainly fine as a default.
SFC: It’s a good observation.
JHD: I’d love to see what the way is to avoid the error when I’m trying to do that.
SFC: Yeah. Cool.
JHD: Okay, and then my second topic is a bigger one we discussed a bit last night in the numerics breakout. In general, I feel like a protocol should have some sort of first-class exemplar. We had thenables long before we had official promises, and it’s great that there’s a protocol. Certain people over the years have had feelings about it being string-based instead of Symbol-based and so on and so forth. But nonetheless, it’s great we have a protocol—but everyone only really uses promises now. Functions return them, and you can await them. The fact that you can make custom thenables is barely relevant, and most people don’t really care. So it’s still good there’s a protocol, but the thing that everyone thinks about is a promise. There was a lot of confusion around iterator helpers because iterable is a protocol that doesn’t have a thing. There isn’t a first-class iterable. Iterable is just like a trait, a protocol you stamp onto something. And unfortunately we don’t yet have MF’s first-class protocols proposal, which I still very dearly want and which would address this concern perfectly. So I can’t say: here is the first-class thing that is the thenable protocol; here is the thing that is the unitable protocol, or whatever name you want to come up with. And so in the absence of first-class protocols, I would expect something like: if this protocol is representing an amount, let’s say, then I would expect Amount to exist and provide the protocol. Meaning the functions that take the fooables would of course read the protocol—they wouldn’t look up, you know, internal slots or anything—but users would still have a first-class thing to say this is a real fooable; Amount is the real amountable, even though the functions take amountables. In the same way as, you know, you can say your function takes a thenable, but really a promise is the thing.
This is sort of a design principle that everyone may not share, but I have this discomfort in general with adding a protocol that doesn’t have an associated first-class thing.
SFC: Yeah, just another observation I want to make is that this idea of having a protocol in this context is quite novel in Intl as well. Because all the things we format in Intl have a type in the language, and then the internationalization is like: take that type and make it human readable for me, right? And that is very much the style of how internationalization has worked for a long time, and adding a protocol to solve this use case would be somewhat novel in the Intl world.
JHD: I wanted to add one more thing. The set methods that we added—the receivers have to be Sets, but the arguments are set-likes, for some definition of that. And similarly with Temporal, there is an options bag, a protocol, to represent each of the Temporal types. But you can convert that into a proper Temporal object as the first-class thing. So I think that there’s a lot of precedent for some form of the design idea I’m expressing.
JRL: Okay. Nicolo?
NRO: Yeah, of all the options, I would go with the one that passes an object with two string properties, like unit and number, or whatever they were. I would not overindex on this being some sort of protocol. To me, we have amount and then we say amount has the same properties, but looking at the slides, what I’m seeing here is just an options bag. It’s not different from many of the places where we have options bags. And there’s no syntax that recognizes this. It’s just literally two functions that take the same shape of options bag. This is different from the Symbols or, like, methods, because those aren’t something that people would manually write in an object—you’d probably have some class or library that creates it—but this feels much simpler than that. It’s just a naming thing. It’s not really a design question.
JRL: Do you want to respond?
SFC: Yeah. I mean, it’s an observation that when we say protocol, what we really mean is that the object that’s passed as a single argument to a function has to have a certain shape. And a lot of times, that shape is an object with fields. It’s an options bag. I think it would be appropriate to use the term protocol to describe that. But I think the reason this is called a protocol is that that word had been used to describe this corner of the amount proposal, and that’s where the name came from. I could have named this proposal the Intl.NumberFormat formattable options bag proposal, right? So I don’t think it’s necessary to really draw a distinction there. I think that either word is appropriate. That’s a good observation.
JRL: I’m actually next. To respond to that directly: if we ever get to amount, amount doesn’t necessarily need to follow this options bag. It could just be special-cased in the format method. Like, if you receive an amount, then do this with the amount methods; or if you receive a regular object, it is expected to have these two fields. That would be totally fine.
SFC: Yeah, so feedback we got from TG3 was that we want to avoid reading internal slots of arguments to functions, where the argument is not the this value. You can read internal slots when it’s the this argument, but not when it’s just another argument to a function.
JRL: Yeah, so I wasn’t suggesting that. Use whatever the public interface is for amount. But the amount interface does not necessarily need to have fields named unit and number.
SFC: And you’re introducing yet another protocol, right? Because then, basically, when the format function needs to do a brand check, it would be like: first let me try to see if the input is an object and read a field named unit and a field named number; if those fields don’t exist, then I try to read the amount Symbol getter function. Right? And I think that it would be better to just have a single protocol that we think solves the use case. You’re correct that I suppose you could have this proposal and also have an amount that implements a whole other protocol, maybe the Symbol function one. You could have both. Yeah, that’s a good observation. You could have both.
JRL: Okay. Lea?
LVU: I also wanted to push back a little bit on the not-controversial thing around conflicting declarations of units, and ask whether this really warrants an error condition, because it seems to me the author intent is clear, both at the definition point and at the format calling point. So I wonder if it might make sense to treat it as a default. And presumably you can call it with or without a unit in both cases, so would it make sense to treat the one specified in the constructor as a default, so that if you use one in format, then that is used, or if you don’t specify one, you fall back to the default? That seems more flexible. And that is kind of similar to Jordan’s point, but yes, even if you can create a new one, that seems kind of annoying and finicky, and it seems better if you can just specify it right there to override the defaults that are internal.
SFC: Yeah, this is good—actually, both your comments made me think a little bit more about the design space we have here, because it was something of this shape that was also the basis for the smart units proposal. So it could be the case that if you pass in an amount-like object here with unit kilometer, but the formatter is unit meter, maybe it converts the kilometers to meters for you. That should probably be an opt-in feature, not an automatic implicit feature. I think there’s definitely design space here. And in that sense, throwing a RangeError for now gives us flexibility in the future to explore some of these other types of behaviors.
JRL: Okay. Daniel Minor.
DLM: Sure, first off, I want to say I support Stage 1 for this. I do think this makes sense, and it’s an area worth investigating. I guess I just have one question, and that is around the timing of breaking this out from the amount proposal. Do you expect this will advance more quickly, or is there a particular reason for doing that now?
SFC: I think what we’ve been doing in the numerics group has been trying to identify what are the individual questions that we’re really asking of the committee, and one topic that has come up several times in the last several plenary meetings has been this idea that there’s actually a protocol hidden inside the amount object. So I wanted to split that out so that we can actually focus on some of the questions that the protocol brings, and the amount proposal stays focused on the amount itself. And this presentation is not intended to imply that one advances more rapidly than the other, although, since this is a subset, I would expect that this would advance first, yeah.
JRL: There’s nothing else on the queue. You’re asking for Stage 1 advancement?
SFC: I would like to have Stage 1. I believe that this is a problem that is worth investigating. We already have Stage 1 on amount, and actually, when I was writing up the proposal, I found some edge cases that we didn’t even consider in the amount proposal. So I do think this is a useful proposal to investigate on its own merits, so I’m asking the committee for Stage 1.
JRL: Okay, so we’ve already heard from DLM for support. On the queue, there’s also Chris and James who are supportive. Give it just a second in case anyone else wants to speak. If not, is there anyone that objects to Stage 1 advancement?
JHD: I’m not objecting to Stage 1—we’re just exploring the problem space—but before pursuing Stage 2, let’s work out offline those questions about first-classness and conflicting units.
SFC: Absolutely. And things like that. Absolutely.
JRL: Okay, Dimitry is also supportive. So hearing no objections, I think you have Stage 1.
EAO: Sorry, just—could you reiterate what the problem statement is? What is it that we are exactly seeking Stage 1 for?
SFC: Right. Good question. So, from the internationalization perspective, we believe being able to associate a unit with a number is important for internationalization purposes. And we’re exploring a method for associating a unit with an amount that is fully encapsulated within Intl.NumberFormat and that does not involve adding a primordial to the language. There’s another proposal that adds a primordial to the language for this same use case, and this is a narrower proposal with a similar motivation.
EAO: So I think the problem statement is like the first sentence of what you just said?
SFC: Yes.
EAO: Cool.
JRL: Okay. With that, I think you have Stage 1. Congratulations.
SFC: Thank you.
Speaker's Summary of Key Points
- Committee agrees that it's valuable to associate a number with a unit as a formatting input
- Open question about how primordial Amount and Intl Unit Protocol should be prioritized and how that should influence the shape of the API
- Feedback on the behavior of conflicting units to be worked out before Stage 2
Conclusion
- Stage 1 for “explore associating a unit with a number”
Import Text
Presenter: Eemeli Aro (EAO)
EAO: This is a bit of an odd approach at a proposal, as I’m trying to maybe speedrun or min-max or something like that how far a proposal can get with minimal effort, because I was not successful in my earlier attempt to just expand the import bytes proposal to include this. The idea here is to import text the same way that we’re proposing maybe we can import bytes. The problem statement is that, in pretty much the same way importing JSON is useful or importing raw bytes is useful, importing text is useful, and it should be just as easy and pretty much work the same way. The way you can currently import text is, well, with the fetch API—we can already do that in many places where you await on a fetch, and then you await on the response.text() of that fetch. And if and once we get import bytes, which is currently at Stage 2.7, what you can do is import a byte array from da da da with type bytes, and then you can use TextDecoder and decode what you just got there. And then the other possible solution is to do what is proposed here: add a new type, text, that lets you do the thing we want.
EAO: The current state of the art is a little bit suboptimal, for the same reasons it’s suboptimal for importing bytes. The operations are always asynchronous, they start late in the execution of JavaScript, and, from a browser perspective, all of the relative paths are rooted at the page’s location rather than the location of the module from which you’re actually doing the import. So we have this preferred solution that this statement should, like, just work.
EAO: There is no intent in the proposal to add any other way of customizing how you get the text, so technically the encoding of the text you get is defined by the host. But in practice it means that if you’re doing something other than UTF-8, you probably need to do something a little bit different: first, rationalize for yourself why the heck you’re doing the thing you’re doing, because you probably shouldn’t be doing that, and secondly, import as bytes and then explicitly decode it. So, yeah, my first question is: can I have Stage 1? And I do see there’s a queue.
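The workaround EAO describes — import bytes, then decode — boils down to the following sketch. The module specifier is made up, and the import statements themselves only run in a module-loading host, so they are shown as comments; the decoding step is real and runnable:

```javascript
// Proposed:
//   import readme from "./README.txt" with { type: "text" };
//
// Workaround once import bytes (Stage 2.7) lands:
//   import raw from "./README.txt" with { type: "bytes" };
//   const readme = new TextDecoder().decode(raw);

// TextDecoder defaults to UTF-8, matching the proposal's expectation
// that anything other than UTF-8 requires explicit decoding.
const bytes = new Uint8Array([104, 101, 108, 108, 111]); // "hello" in UTF-8
const text = new TextDecoder().decode(bytes);
console.log(text); // "hello"
```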
LVU: I just wanted to express support. I’ve needed this many times. I think this is very useful for paving the way for future structured formats that we may later want to add a specific type for. For instance, right now we have CSS that we can import in the web platform with type css. It might have been a better path for authors if they could initially import the CSS with type text and then later, once we got to actually importing an object, upgrade to that, rather than having the all-or-nothing situation they have today. Same for JSON: you could more easily import it as text and then JSON.parse it. And you can see the same thing for future structured formats. Suppose your data is in a YAML file and not in JSON—we don’t have type yaml yet. Yes, you could fetch today, but there are many situations where import.meta.url is not reliable. For example, today bundlers mess with it. Yes, they shouldn’t, but it does happen. And something like this would be much better. Yeah.
JRL: Okay. We have several plus one support. One from Chris, one from Michael, skipping straight to James.
JSL: Yeah, expressing support. I will point out, though, that Cloudflare Workers already has a variation of this implemented, and it works fantastically. We’ve had users comment on it—it’s not a hypothetical case. There are real use cases for this in the wild. Having this formally supported would be fantastic.
JRL: Okay. And a couple more + 1s from myself and Mateo. Daniel Minor next
DLM: Support this. I wanted to mention that in terms of implementing this, the HTML spec is important. Import bytes has a draft which is being implemented right now, and this would be similar. I want to highlight the importance of that.
EAO: Yeah, absolutely. If and once this advances, there should be one or two HTML spec PRs. I haven’t done anything in that space because I don’t want to do any work that somebody else is doing and I can just copy.
JRL: Next up, Christian.
CHU: Yeah, I also want to explicitly express support, because I think this paves the way to get away from micro-DSLs—we’ve been using them for ages, like putting something in an import and doing things—yeah. So that’s a good idea.
JRL: Okay. Shane?
SFC: Yeah. I just wanted to clarify whether this import text works based on either the explicit type option for text, or based on the file extension, because I think there are definitely cases where we add more file types that we don’t currently have, and those might want to be loaded not as strings.
EAO: Nothing in the spec text looks at the URL from which the import is done. The extension is completely up to the host—the current spec already allows an implementation to do all sorts of things. So doing an import as text based on an extension is something that is allowed; this proposal does not propose to do anything about it.
SFC: Cool
EAO: Using .txt in the example is meant to be purely exemplary.
SFC: In order to get the behavior, you must have the with type text, if you are using an ESM-style import statement?
EAO: If you are using an ESM-style import and use with type text, then you will get text. This proposal and the spec do not mandate what happens when you don’t.
SFC: Okay.
JRL: Okay. That’s it for the queue. We had several messages of support.
EAO: I think it means I might have Stage 1.
JRL: Yeah. Any objection to Stage 1? Okay. You have Stage 1.
JRL: Next question: do you want to go straight to Stage 2?
EAO: I would like to ask for Stage 2.
JRL: Do you want to skip the 2.7?
EAO: I can’t do that because we need to do a review. That’s my next slide.
JRL: Got it. We need—okay. Okay.
JRL: So support for Stage 2? Sorry. I did not see that. Jake?
JAD: On the web side of this, for JSON imports we require the JSON MIME type for it to work. What do you see it being for text? Is just any MIME type fine?
EAO: I would say any MIME type.
JAD: I would agree
JRL: That’s an HTML question?
JAD: Yeah.
JRL: Okay. We have James Snell on the queue to support Stage 2. Also, CDA supports: “but I have not looked at the spec. Sounds good to me if acceptance criteria have been met”.
EAO: [points at screen] CDA, you have now looked at the spec. This is the entirety of the change here. We add a CreateTextModule abstract operation, which takes an argument source and returns a normal completion; it performs the following steps when called: return the creation of a default-export module for source. And it is used in the place where currently you check for the type being JSON and then do a specific thing. Rather than just doing that, we do the thing with JSON here, and the same for text: if it’s type text, then we perform FinishLoadingImportedModule with the result. The only change compared to JSON is CreateTextModule instead of ParseJSONModule; ParseJSONModule does a superset of this—it parses the JSON and then you get the module around the parsed JSON. This doesn’t do that. It gives you the string from which the JSON would have been parsed. And then we add a bit to the note here, because the note is currently only about type JSON, but this is also about type text.
JRL: Okay. Jordan, do you want to reply
JHD: I support Stage 2 and I reviewed the spec text. I am marked as a reviewer; I can be that for 2.7 too, which I also support.
JRL: Nicolo?
NRO: I also reviewed the proposal already, a few weeks ago. And, I mean, it looks fine. It’s the same spec text; I had one comment, but they already handled it.
JRL: Okay. So we had support for Stage 2. There’s a new question. Guy?
GB: Yes, sorry, just going through, I was thinking: is there any guarantee around the well-formedness of the string? Can we have an ill-formed string in JavaScript?
EAO: Yes. I don’t think any of the spec text prohibits that being the result of this thing. There’s stuff in the HTML spec, and in the behavior of any rational implementation, but effectively, if you could wrap the most broken thing you can think of in double quotes and pass it in with type JSON, it would probably work with this too.
JRL: Michael with the reply.
MF: You are saying you’re only supporting UTF-8 in text?
EAO: Not literally in our spec
JHD: The spec as written, for both JSON and text modules, normatively has an assert that a string is passed, and I believe the encoding of that is specified already. So I think it is sort of indirectly already locked down. I don’t know if that means it’s UTF-8.
MF: A JavaScript string is a sequence of UTF-16 code units.
GB: No, I was finished on that topic. It was an important detail to clarify.
JRL: Another reply from the Nicolo
NRO: We talked about this with the fetch people, and like, web people. Both TextDecoder and fetch’s response.json() already behave the same way—not response.text(). These imports follow whatever that behavior is on the web.
JRL: Skipping Jordan. Nicolò. Guy? Yes. Shane, I think. Okay, I screwed up the queue. Shane, it's you; I don't know which topic you were on.
SFC: Yeah. My question was also on the character encoding thing, which is: there are definitely still very real platforms out there that don't use UTF-8 for files. My understanding is that the module loading infrastructure figures this out for you and loads files with the correct encoding, such that when they become strings, the strings are well-formed JavaScript strings. But it also raises the question of whether the option should allow for reading files of different encoding types, and whether that should be explicitly part of the proposal or not. If it does what I just described, that is appropriate. But I wonder if it is also important to be able to specify the encoding of the file at the call site.
NRO: Can I answer this? So response.text() defaults to UTF-8 regardless; you don't have to say UTF-8, it's assumed. There is actually a way, when fetching, to change that using the Content-Type header or something like that on the response. When we asked the fetch people, for this they want to assume UTF-8, ignore the header, and force it to be UTF-8; if you want the header respected, you have to fetch manually and call the decoder yourself. At least the web has a strong preference for assuming UTF-8, at least in this API.
JRL: Okay. And WH?
WH: If I understand this correctly, this can import arbitrary strings, right?
EAO: Yes.
WH: Yes. So here is the issue, JSON is not arbitrary strings; it has a specific structure. So it’s fairly easy to guess if you just look at its first few bytes whether it’s in UTF-8 or UTF-16 big or little endian. I don’t know if any implementations guess or not, maybe some do? But if it’s just arbitrary text, you can still guess but it’s much easier to guess wrong. And that can have some interesting consequences.
JRL: Okay. That’s everything on the queue now.
EAO: Would anyone object to Stage 2?
JRL: Okay. We have had several supporters. There are no objections to Stage 2. I think you have Stage 2.
EAO: Excellent. Could I have a couple of reviewers?
JRL: Jordan. Nicolò first, with an answer to WH.
NRO: On the web when you—
JRL: Point of order. We need note-takers if we are going to discuss
NRO: An answer to WH's question: on the web, when you import JSON, it assumes that the bytes received are UTF-8. It doesn't do anything else; it parses the bytes as UTF-8.
JRL: Before we move on to the next point, we need to assign Stage 2 reviewers. JHD and NRO, you have reviewed. Can I volunteer you two?
JRL: Yes to both.
JRL: Excellent. Thank you.
JRL: Your Stage 2 reviewers are Jordan and Nicolò.
EAO: For 2.7, I need editor review as well. MF. Thank you.
JRL: CZW?
CZW: So I think—I am not sure if this has been mentioned, but I think the current spec doesn't specify the encoding; it can be defined by the host, and the host can use whatever mechanism it likes to determine the encoding for the text.
EAO: Pretty much, yeah.
JRL: Nicolo?
NRO: We also don’t define encoding for JSON modules
JRL: Shane
SFC: Yeah. This proposal exposes this encoding problem much more directly to users of JavaScript than any of the other module import types. Import bytes imports the bytes, right? And import JSON imports the JSON, and JSON has specific requirements about how encoding works for a well-formed JSON file. But for text files, this problem is a little more pertinent, and I think it's worthwhile investigating whether the assumptions that were made for previous import types still apply in the same way here.
SFC: So I think that it would be good to—I know that a lot of the people on my team, for example, care a lot about text and file encoding and how they work in different programming languages, and I haven’t had the chance to review this. It would be good to spend time investigating that.
JRL: Okay. And then a couple of replies. Michael?
MF: I disagree. I don't see how it's different from JSON import: it has to be decoded as text before it's then parsed as JSON. There's a lot of focus on the encoding here, but our role here is just as glue code, basically. We don't do the decoding; that is up to the platform. The platform can decide its encoding however it wants. If it's a web browser, it can base it on a Content-Type header. The underlying system can make whatever choice it wants. We are overly concerned about things that are not within our domain here.
JRL: And next is me. I also agree. Let's assume it's UTF-8. If it's not, you can use import bytes and decode it in the exact representation you want.
JRL: Sorry, I am agreeing that we should not specify UTF-8 in our spec, and adding that if you don't want UTF-8 there is an escape hatch for you.
EAO: Wait. What? You say, we don't—could you clarify your previous statement?
JRL: Import text can assume it is UTF-8. It should because that’s what the web platform does. If you want something else, you have import bytes, and that gives you the bytes, and you can transform those bytes into anything you want
EAO: To clarify, you are happy with the JavaScript spec not mentioning any of that
JRL: Yeah.
EAO: Cool
JRL: Shane?
SFC: Yeah. Consider what happens if you were to take, say, a big-endian UTF-16 encoded JSON file and load it using import type JSON. What happens now? If the engine assumes it's a UTF-8 file, it fails to parse, and you're safe: those errors are loud and early. Whereas if you were to pass a UTF-16 big-endian encoded text file into import text, then what you end up getting in those environments is not a failure; instead you get something with a bunch of, like, replacement characters. It's much less loud if there is actually a problem with the file encoding, which makes me somewhat concerned.
JAD: So your JSON file could be, like, quotes that start and end, and anything in the middle—meaning certain characters could be different characters in another encoding—and you get out a string with unexpected characters in it.
JAD: Like it wouldn’t have to be an actual failure. Right?
SFC: The encoding of the quote character is not the same, because in UTF-16 there's a null byte involved, so the JSON ends up with invalid syntax.
EAO: What Jake is considering is situations dealing with an encoding that is not UTF-16, but one of the older encodings that looks like UTF-8—
JRL: Okay. So to begin with we have a point of order of less than 5 minutes remaining. Up next a clarifying question from Keith.
KM: Isn't that the same as, like, the encoding behavior for any other Web API—or, I guess, probably any Node API that loads from a file? If the bytes are UTF-16, you get random garbage?
EAO: Yes.
KM: Okay.
MAH: So I agree with Michael. The encoding is not in the realm of the JavaScript spec; it's a host concern. How the host decides to fetch the bytes and interpret them is not something we control. If it is not doing its job properly, that's not something we can fix one way or another.
JRL: Michael, you are on the queue, if you like
MF: SFC brought up at the beginning of the topic that the Unicode Consortium should get involved in a review. Has this now convinced you? Are we getting back to agreement that that is outside of the scope of this proposal?
SFC: I haven’t had time to think through this fully.
MF: Okay. Okay. I wanted to see whether that was still being considered.
JRL: Andreu
ABO: About what should be mentioned about UTF-16: it can be an issue. But for everything the specs are doing with regards to encoding, even response.text(), they check whether the content starts with a byte order mark—and, hmm, I am not actually sure right now whether that covers the UTF-16 marks; I thought I knew this. But this kind of thing is at the very least checked, and it is possible that in some cases the content actually gets decoded as UTF-16. In any case, for UTF-16 files that have a byte order mark, the mark only gets stripped if it matches the decoder—if you are decoding as UTF-8, it's only stripped if it's a UTF-8 byte order mark. Otherwise you end up with a character near the beginning of the file that shouldn't be there, and depending on how you are parsing, that might be enough to realize there is a problem early on.
JRL: There is a reply to that from Nicolo. Before that we have 1 minute left.
NRO: Most APIs don't check it—maybe HTML does. For JavaScript modules we don't check it. If there's a byte order mark, it shows up as a character.
ABO: If it’s UTF-8 byte order mark it is stripped.
JRL: Okay. So a little bit of disagreement. Eemeli?
EAO: Given the ongoing conversation about some of the details Shane has mentioned, I would like to express interest in a continuation later in this meeting—if there's time, possibly a short continuation of 5 or 10 minutes—to settle whether we can reach 2.7 here.
CDA: We can go to the half-hour mark if you want to use the next few minutes
EAO: Shane, do you think that your concerns might get settled in the next 5 minutes, or would it be better to have 5 minutes later after other off-line talks?
SFC: I would very much appreciate having a continuation on Thursday. The scope of this proposal was a little more than I had anticipated ahead of time, so I didn't spend a lot of time before this meeting reading the proposal in great detail, because I didn't realize it had Intl impacts. I would like time to do that, because that's why I am on this panel.
EAO: If there’s no time slot for continuation, I am okay with that. It’s got Stage 2. I have reviewers who seem like they are ready to approve this for 2.7. So I am pretty happy about this. Thank you, everyone.
SFC: Also, just for the record: I am very much in support of the problem this is solving and the general approach that Eemeli put forth, which is why I very much support Stage 1 and 2. For 2.7, I would like to look into more detail.
JRL: Okay. So last one in the queue. Nothing else.
JRL: Of course now.
EAO: Mathieu?
MAH: If the only thing delaying this right now is Shane needing time, maybe you could ask for 2.7 conditional on Shane having had time to review.
EAO: Yes, please.
EAO: I would like to ask for that.
CDA: Just be explicit about what the condition is?
EAO: I am asking for conditional 2.7 at this meeting, provided that we get an okay from SFC at some point later in this meeting that he's okay with 2.7. If he doesn't give that okay, this stays at Stage 2 and we return to the topic at a later meeting.
JRL: Okay. That seems fine to me. Any objections?
JRL: Okay. So conditional 2.7. Congratulations.
Speaker's Summary of Key Points
- In a similar manner to why importing JSON or raw bytes is useful in JavaScript, importing text is useful, and should be just as easy.
- The preferred solution for this is to add support for type: 'text' in import attributes.
- This proposal is a minimal change, riding on the coattails of the Import Bytes proposal.
Conclusion
- The proposal received wide support. Most of the discussion focused on the role of the JS spec in defining or considering the encoding of the imported text.
- Support was given for Stage 1 and Stage 2, with JHD and NRO as reviewers.
- Conditional Stage 2.7 support was received with reviewer and editor approval, pending confirmation from SFC that no encoding concerns remain.
TypedArray Concatenation
Presenter: James Snell (JSL)
JSL: Okay. Yes. This is for Stage 1 consideration. I am not going to go through every slide, because you can go through it yourselves; it's mostly background on why I want to go here. The problem statement is: we should provide a method of concatenating TypedArrays that enables—that is the key word—implementations to optimize performance in some way.
JSL: I will point out that I use language like zero-copy and lazy copy, and talk about ropes and such in here. None of that is actually essential at this point. We want to talk about the problem space, not how it's implemented.
JSL: But really, this is a long-standing problem. If we look at Node, Node has had the Buffer class for a while, and it has Buffer.concat: you can take two buffers and get back a third one with the contents concatenated together. In the language we already have set; we will look at the definition of that in just a few minutes, but it does allow us to concatenate these things—you can allocate a third array of the total length and set the pieces into it. It's used fairly commonly for concatenation on the web. But still, in most of the cases you will find, if you go out and search, code is using Buffer.concat in Node or one of the polyfills or other runtimes. That is the fallback, although set is becoming a lot more common to find.
JSL: Things that don't work, of course: you know, like Array.from—this is terribly slow, but I have seen it in the wild, which was scary. Motivating use case: WritableStream is one of the big ones. It's very common for WritableStream implementations to do this kind of accumulation and then, once a certain amount of data has accumulated, forward it along. One of the challenges with WritableStream is that it's never clear, when given one, what type of stuff it accepts. The API itself will just take any arbitrary value—it might be bytes, strings, objects, numbers; it doesn't matter.
JSL: All you know is that you can write something to it; you never know what is going to be accepted. One of the common patterns is to either concatenate incrementally as you go, as this code is doing, or to collect everything into an array and pass the array down. Unfortunately, you never know if something down the pipeline will accept that array or not. Typically, when it accepts bytes, it accepts one thing: a buffer or a TypedArray of some kind. The key challenge is that this gets really expensive in a lot of pipelines—if you look at server rendering pipelines, among other things, this pattern ends up expensive—and one of my hopes is that we come up with a lower-cost concatenation that defers the actual assembly of the pieces to when they are actually needed. You can end up accumulating these things right down to the point where you use the data, similar to concatenating strings. There are other use cases here—datagrams, for example; I won't go through all of them, they're in the slides. The proposal—and again, I am not married to any of the syntax—is to add a concat operation. It can be done by adding a new API, or we could actually modify set. Set, right now, does not account for a host-optimized concatenation: if you look at the steps, you go down and walk through each individual member.
JSL: So it really kind of unwinds that, and it doesn't have a hook in there at all for a host-defined implementation of how this works. If we had a host implementation hook, it could say, "I am not going to copy this right now." That's not the way the spec is written. The way this looks in the WritableStream case is basically eliminating the boilerplate: if the concatenation is deferred, you get back a thing that is still a TypedArray, but from the user's point of view, we eliminate the copy code, and from the runtime's perspective, we have an opportunity to actually optimize in some way, which we don't really have now.
JSL: And that's basically it. There are some considerations here, depending on how we do this: is it an immutable array or mutable? Resizable? Some details to work through. Mixed types—you know, concatenating a Uint8Array with something else. None of the use cases try to mix them, so it's fine if we decide to say all the arrays must be the same type. But again, all of this is implementation detail to get into later.
JSL: Some prior art: if you look at npm, there are modules like BufferList that do this pattern—BufferList allows you to accumulate and get back things that act like a buffer. 43 million downloads a week; it's still very widely used. This is a very common pattern, and I would like to see it moved into the language. That's the preamble; there's not much to it beyond that. We can go to the queue, if anyone is on it. No?
MAH: I support Stage 1. I want users to have a way to express their intent, what they want to do, which is putting multiple buffers that they have accumulated together. And you lose that intent in the way you have to do it right now, which is to allocate a new buffer and copy the bytes into it. Everything else is optimization and non observable behavior that the spec is not concerned with.
JRL: Okay. And Gus?
GCL: Yeah. I also support Stage 1 for this. I think it's a very, very useful utility that could provide some good performance gains in practice.
JRL: Olivier?
OFR: Yeah. I just want to add—I find the motivation good—but you added as the first sentence in your proposal repository that this will be zero-copy. I really don't want to guarantee that. I don't see a zero-copy version of this appearing in engines any time soon, to be honest.
JSL: And that's fine. The language in the repo and in here is just aspirational—consider me putting subliminal messages in there.
OFR: There's a bit of a style that—I mean, it potentially unlocks this. But at the same time, I find it problematic when proposals are motivated with such a claim, or at least when such a claim is added to the proposal, when it's quite a considerable implementation effort to go down that road.
JSL: That's fine. As we progress, that language will be removed so it's not assumed up front.
JRL: Before we move on, is yours a reply to the current topic? About ropes? I assume so.
YSZ: Maybe. I have one question or comment. Basically, V8, SpiderMonkey, and JavaScriptCore have all had issues where a very small rope keeps very large backing storage alive—something ends up retaining a very large string forever, the memory cost is considerable, and it's hard to find what is retaining it when, say, a substring keeps the whole string alive. So I have the feeling that a rope-like thing is a bit hard to manage in terms of memory size and life cycle. Maybe express the data structure—a rope or something—much more explicitly, instead of hiding it behind a TypedArray.
JSL: To respond on that, at this point, my concern is more about being able to have the ability to optimize in the language rather than necessarily worrying about how it’s optimized. Right now the hook is not there to allow us to optimize and that’s what I want to make sure is there.
OFR: Yeah. One thing to add: I think it is actually possible—there is potential to optimize here, because we can do the concatenation at once: we can allocate the correct buffer size and put everything in. The proposal brings potential to optimize this. However, ropes have annoying performance cliffs, and we see those in strings all the time. We don't actually like that we have ropes in strings that much—because, for example, if you do something like indexOf or a find, and ropes also tend to be very imbalanced in practice—it causes a lot of pain.
JSL: Yeah. It's taken me a year to work up the courage to bring this proposal, specifically for that reason. I don't want to go down that same path, but I do want the optimization path. Daniel?
DLM: We share the implementation concerns that have already been raised.
JSL: WH?
WH: I support Stage 1. I also have a question: So far the discussion has been about just using a lazy concatenation strategy. I am curious if something like doubling of the buffer size would be acceptable here?
JSL: I think so.
WH: Okay. So we will just leave it to implementations to define how it’s done, as long as it’s reasonably efficient?
JSL: Yeah. That’s where I am at right now.
WH: Thank you
MAH: Yeah. These are not Stage 1 concerns, but reading this, we have a few observations that are Stage 2 scope. Concat is a confusing name: concat is an instance method on strings, while this presentation—and what Node does—is a static method on the constructor. Also, we want to figure out how to create a concatenation of immutable ArrayBuffers that is born immutable, so that you don't need a transferToImmutable step. But all those things are Stage 2.
JHD: Is there a proposal repo? Stage 1 does require one. It can be conditional on adding the repo, though.
JSL: Yeah. Let’s do that.
JRL: So yeah, okay—conditional Stage 1. Do we have support for conditional Stage 1? We got it from WH. Anyone else? Jordan. Sorry, MAH too. Also, Chris supports. Okay, are there any objections? Without objections, you have conditional Stage 1. Please make a repo.
JSL: Okay. Thank you.
Speaker's Summary of Key Points
- Concatenating TypedArrays is a very common use case, particularly in uses of WritableStreams, and others. It is, however, fairly difficult to optimize for. Developers need to either manually allocate and copy using `TypedArray.prototype.set` with manual offset calculation, multiple allocations, etc., all of which makes it slower, or they need to rely on non-standard APIs like Node.js' `Buffer.concat`.
- The problem statement is essentially: We should provide an optimizable way of concatenating multiple `TypedArray` instances in a single operation that affords implementations the opportunity to optimize.
Conclusion
- Conditional Stage 1 was accepted pending the creation of the GitHub repo for the proposal.
TypedArray Find Within
Presenter: James Snell (JSL)
JSL: This one is equally simple. We have indexOf, which looks for individual bytes or elements within a TypedArray. It is very common, though, to look for subsequences. Node supports this on Buffer—Buffer actually overrides indexOf to support searching for subsequences. We find searching for subsequences in a lot of web apps; I think Next.js does it, and quite a few others. It would really be best to provide a method for searching for a subsequence within a TypedArray. That's the problem statement.
JSL: I have really no syntax suggestions on this—no preconceptions about how this is implemented. As I imagine it, it's an implementation-defined mechanism. The whole point is to find the location of a subsequence, or determine if the subsequence is there. It's like a find and a contains: one returns an index, the other returns a Boolean. And the ask is conditional Stage 1 again—I need a repo. But yeah, super straightforward.
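As an editorial sketch of what "find within" means (the `findSubsequence` name and signature are assumptions; the proposal has no settled API):

```javascript
// Naive subsequence search over a TypedArray: returns the index of the
// first occurrence of needle in haystack, or -1 if absent. An engine
// could implement this with optimized memmem-style scanning.
function findSubsequence(haystack, needle) {
  if (needle.length === 0) return 0;
  outer: for (let i = 0; i <= haystack.length - needle.length; i++) {
    for (let j = 0; j < needle.length; j++) {
      if (haystack[i + j] !== needle[j]) continue outer;
    }
    return i;
  }
  return -1;
}

console.log(findSubsequence(new Uint8Array([1, 2, 3, 4]), new Uint8Array([3, 4]))); // 2
```

The "contains" variant JSL mentions would simply be `findSubsequence(h, n) !== -1`.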
JSL: Not seeing anyone on the queue.
JRL: Okay. Give it a second.
MAH: So… I think our main observation is that it's all about keeping the language consistent. Right now we have indexOf on strings, and we have includes on arrays. If we go with something that has slightly different syntax again, should we introduce that slightly different syntax on Array or on iterators too? Again, it's not really a Stage 1 concern—we're supportive of the problem statement—but we want to be careful about how we name things and the exact semantics, because it overlaps with strings: this is a multi-element match, really, while the Array methods are single-element finds. So: how to not get developers confused, really.
JSL: That's an excellent point. I would be absolutely fine if we defined this in general terms, as part of the iterator protocol: basically, find a subsequence within an iterator. So…
JRL: Up next is WH.
WH: My point is very similar to the one that was just made. I fully support exploring the problem space, curious what the API will look like. I hope I don’t have to learn another API different from all the existing ones. I also hope I can do things like search backwards.
JSL: And then Keith?
KM: I guess on the iterator topic: the iterator version is far less likely to give you the performance you want. Iterator protocols are super complicated in JavaScript; optimizing them takes a lot of herculean effort, and that work has to be done separately. If you define it in a general way, it ends up optimized for exactly one specific case, and it falls off a cliff if you use it in some other way, so it's better to avoid that. Unless—I mean, maybe I can imagine some hypothetical future where all the engines have optimized iterators and a clean abstraction for them, but as it stands today, we're many years from that, if it ever happens. My two cents is: do not do the iterator version; add this as a separate thing and don't recommend the iterator route. But…
JSL: Okay. You have a +1, but I am not sure what you are +1-ing.
JRL: The iterator protocol
JSL: And Chris: support for Stage 1—I am assuming that's conditional Stage 1.
JRL: Okay. So you are asking for conditional Stage 1 on creating a repo?
JSL: Yeah.
JRL: We have support from Chris. Anyone else? JHD, MAH
JRL: Okay. Any objections—
WH: I support this.
JRL: WH, thank you. Are there any objections to conditional Stage 1?
JSL: All right. Congratulations. That was easy. Thank you.
JRL: Okay. We have 10 minutes left. Is there anything short to discuss?
Speaker's Summary of Key Points
- While `TypedArray` has provided the ability to search for individual elements using `indexOf`, a common case in the ecosystem is searching for the location of a sequence of elements, or determining if the `TypedArray` contains a subsequence of elements (a predicate). This is often achieved using slow polyfills that are difficult to optimize. The proposal is that the language should provide a sub-sequence ("find within") search that can be optimized by implementers.
Conclusion
- Stage 1 was accepted pending the creation of the GitHub repo.
TypedArray byteOffset Mistake
Presenter: James Snell (JSL)
- no proposal
- slides
JSL: I wasn't going to bring this up; I threw the slides in to have them there, and was going to bring it up at a later meeting. byteOffset is a problem. I see this in practice a lot with subarray; basically, it's a mistake that is common among a lot of developers. You have a large array, the developer takes a subarray off of it, and completely forgets to check byteOffset and byteLength on the resulting view into the underlying ArrayBuffer—they tend to completely forget it's there. It would be nice—again, don't worry about the syntax—to have an option at some point to still take a subarray view of the thing but have the byteOffset be 0, so this mistake can be avoided, or you can just ignore the fact that it is actually a view on something larger.
JRL: Can you make it larger
JSL: I can try. Hold on.
JRL: If you just hit the slide show, does it not present the slide?
JSL: Let’s see. It does, but I don’t make it larger
JSL: You have the larger array, you create a Uint8Array subarray off it to create a view, and you pass the view off to something else that doesn't realize it needs to check byteOffset. A very common problem. It would be nice to be able to create those views such that it doesn't matter that it's a view: if somebody forgets to check, it still does the right thing.
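The mistake JSL describes can be shown in a short sketch (editorial illustration, not from the slides):

```javascript
// A subarray shares the parent's ArrayBuffer, so code that grabs
// view.buffer without honoring byteOffset/byteLength silently reads
// the wrong bytes.
const big = new Uint8Array([10, 20, 30, 40, 50]);
const view = big.subarray(2, 4);          // [30, 40], byteOffset = 2

// WRONG: ignores view.byteOffset and reconstructs the whole parent buffer.
const wrong = new Uint8Array(view.buffer);
console.log(wrong.length);                // 5, not 2

// RIGHT: account for byteOffset and byteLength explicitly.
const right = new Uint8Array(view.buffer, view.byteOffset, view.byteLength);
console.log([...right]);                  // [30, 40]
```

JSL's suggestion is, roughly, a way to create the view so that the "wrong" path above cannot reach bytes outside the view.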
MAH: This sounds somewhat related to Jack’s proposal about limited views. Basically, constructing a view for which you cannot get back the wider ArrayBuffer. I don’t know. There may be—I know a lot of that proposal got subsumed by the Immutable ArrayBuffer proposal. But there were still a lot of other things that the proposal had and this seems like one of them. So maybe it’s still motivated to keep exploring that.
JSL: So I was not prepared to ask for Stage 1 for this. But if there is another support for it—
JSL: Nice. Okay.
JSL: Then we can go from there.
JSL: Okay. Perfect.
RPR: Nothing else on the queue. So that wraps it up. Thank you, everyone.
Speaker's Summary of Key Points
- This was an unplanned discussion, given we had a few extra minutes. The `byteOffset` property on TypedArrays continues to be tricky for users, who often forget they need to account for it in case the `TypedArray` is a view on a larger `ArrayBuffer`. It would be nice to have an option to effectively hide that, such that they don't have to.
Conclusion
- There is an existing proposal that can be progressed to address it. See https://github.com/tc39/proposal-immutable-arraybuffer