JavaScript Micro Performance Testing, History, and Limitations
I think performance optimization interests many developers as they learn more about the different ways to accomplish a task. Some internal voice asks, "Which way is best?" While the standards for "best" keep shifting, as works like Douglas Crockford's 2008 JavaScript: The Good Parts illustrate, performance is appealing because we can test it ourselves.
However, testing and proving performance are not always easy to get right.
A Bit of History
Browser Wars
By the early 2000s, Internet Explorer had won the first browser wars. IE was even the default browser on Macs for a while. The once-dominant Netscape was sold to AOL and eventually shut down, and its spin-off Mozilla spent years in beta on a new standalone browser, named first Phoenix, then Firebird, and finally Firefox.
In 2003 Opera 7 came out with Presto, a new, faster rendering engine. Apple also released Safari, a performance-focused browser for Macs built on KHTML, the little-known engine from the Konqueror browser. Firefox officially launched in 2004. Microsoft released IE 7 in 2006, and Opera 9 shipped with a faster JavaScript engine. 2007 brought Safari to both Windows and the new iPhone. 2008 saw the arrival of Google Chrome and the Android browser.
With more browsers and more platforms, performance was a key battleground of this period. New browser versions regularly announced that they were the new fastest browser. Benchmarks like Apple's SunSpider and Mozilla's Kraken were frequently cited in release notes, and Google maintained its own Octane test suite. In 2010 the Chrome team even made a series of "speed test" experiments to demonstrate the performance of the browser.
High Performance JavaScript
Micro-performance testing saw a lot of attention in the 2010s. The Web was shifting from limited on-page interactivity to full client-side Single Page Applications. Books like Nicholas Zakas's 2010 High Performance JavaScript demonstrated how seemingly small design choices and coding practices could have meaningful performance impacts.
Constant Change
Before long, the competition between JavaScript engines was addressing some of the key performance concerns raised in High Performance JavaScript, and the rapid changes in the engines made it difficult to know what was best right now. With new browser versions and mobile devices everywhere, micro-performance testing was a hot topic. By 2015, the now-closed performance testing site jsperf.com was so popular it started having its own performance issues due to spam.
Test The Right Thing
With JavaScript engines evolving, it was easy to write tests but hard to make sure your tests were fair or even valid. If your tests consumed a lot of memory, later tests might see delays from garbage collection. Was setup time counted or excluded consistently across tests? Were the tests even producing the same output? Did the context of the test matter? If we tested !~arr.indexOf(val) vs arr.indexOf(val) === -1, did it make a difference whether we were just evaluating the expression or consuming it in an if condition?
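As a rough illustration, here is a minimal harness for that last comparison. The array contents, sizes, and iteration counts are assumptions made up for this sketch, and it still carries the usual caveats about warm-up, garbage collection, and engine optimization:

```js
// Hypothetical micro-benchmark: two equivalent ways to check "value not in array".
// How (and whether) the result is consumed can change what the engine executes.
const arr = Array.from({ length: 100 }, (_, i) => i);
const val = -1; // never present, so both checks report "not found"
let hits = 0;   // consume the results so the work isn't trivially discarded

function timeIt(label, fn) {
  const start = performance.now();
  for (let i = 0; i < 100000; i += 1) {
    fn();
  }
  console.log(label, (performance.now() - start).toFixed(2), "ms");
}

timeIt("!~indexOf     ", () => {
  if (!~arr.indexOf(val)) hits += 1;
});

timeIt("indexOf === -1", () => {
  if (arr.indexOf(val) === -1) hits += 1;
});

console.log(hits); // use the accumulator so it can't be optimized away
```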
Compiler Optimization
As the script interpreters were replaced with various compilers, we started to see some of the benefits — and side-effects — of compiled code: optimizations. Code running in a loop that has no side-effects, for instance, might be optimized out completely.
```js
// Testing the speed of different comparison operators
for (let i = 0; i < 10000; i += 1) {
  a === 10;
}
```
Because this performs an operation 10,000 times with no output or side effects, the optimizer could discard it completely. That wasn't guaranteed, though.
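One common workaround, sketched below, is to make the loop produce a value the program actually uses, so the engine cannot easily prove the work is dead. The variable names here are illustrative, and even this is no guarantee against constant folding or other optimizations:

```js
// Give the loop an observable result so dead-code elimination is less likely.
const a = 10;
let matches = 0;

for (let i = 0; i < 10000; i += 1) {
  if (a === 10) {
    matches += 1;
  }
}

// Using the result (logging it, returning it, or storing it somewhere the
// program reads later) keeps the comparison from being provably useless.
console.log(matches);
```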
Moving Targets
Also, micro-optimizations can change significantly from release to release. The unfortunate shuttering of jsperf.com meant millions of historical test comparisons across different browser versions were lost, but the same drift is still visible today if you re-run tests as browsers update over time.
It's important to keep in mind that micro-optimization performance testing comes with a lot of caveats.
As performance improvements started to level off, we saw test results bounce around. Part of this was improvements in the engines, but we also saw engines optimizing code for common patterns. Even if better-coded solutions existed, there was a real benefit to users in optimizing common code patterns rather than expecting every site to make changes.
Shifting Landscape
Worse than the shifting browser performance, 2018 brought changes to the accuracy and precision of browser timers to mitigate speculative execution attacks like Spectre and Meltdown. I wrote a separate article about these timing issues, if that interests you.
Split Focus
To complicate matters, do you test and optimize for the latest browser, or your project's lowest supported browser? Similarly, as smartphones gained popularity, handheld devices with significantly less processing power became important considerations. Knowing where to allocate your time for the best results – or most impactful results – became even more difficult.
Premature Optimization?
Premature optimization is the root of all evil.
-- Donald Knuth
This gets quoted frequently. People use it to suggest that whenever we think about optimization, we are probably wasting time and making our code worse for the sake of an imaginary or insignificant gain. This is probably true in many cases. But there is more to the quote:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
The more complete quote adds critical context. We can spend a lot of time on small efficiencies if we allow ourselves to do so. This often takes time from the goal of the project without providing much value.
Diminishing Returns
I personally spent a lot of time on these optimizations, and in the moment it didn't seem like a waste. But in retrospect, it's not clear how much of that work was worthwhile. I'm sure some of the code I wrote back then shaved milliseconds off the execution time, but I couldn't really say if the time saved was important.
Google even talks about diminishing returns in their 2017 retirement of the Octane test suite. I strongly recommend reading this post for some great insight into limitations and problems in performance optimization that were experienced by teams dedicated to that work.
So how do we focus on that "critical 3%"?
Application not Operation
Understanding how and when the code is used helps us make better decisions about where to focus.
Tools Not Rules
It wasn't long before the performance increases and variations of new browsers started pushing us away from these kinds of micro-tests and into broader tools like flame charts.
If you have 30 minutes, I recommend this 2015 Chrome DevSummit presentation on the V8 engine. It talks about exactly these issues: the browsers keep changing, and keeping up with those details can be difficult.
Using performance monitoring and analysis of your running application can help you quickly identify what parts of your code are running slowly or running frequently. This puts you in a great position to look at optimizations.
Focus
Performance monitoring tools and libraries let you see how the code runs and which parts need work. They also give you a chance to see whether different areas need work on different platforms or browsers. Perhaps localStorage is much slower on a Chromebook with limited memory and eMMC storage. Perhaps you need to cache more information to combat slow or spotty cellular service. We can make guesses at what is wrong, but measuring is a much better solution.
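As one starting point in the browser, the built-in User Timing API lets you mark and measure specific sections of your own application code, and the entries also show up in the DevTools performance panel. A minimal sketch follows, where the "cart-update" names and the updateCart function are placeholders invented for illustration:

```js
// Report User Timing measures as they are recorded (also visible in DevTools).
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name}: ${entry.duration.toFixed(1)} ms`);
  }
});
observer.observe({ entryTypes: ["measure"] });

// Placeholder for the real application work you want to measure.
function updateCart() {
  let total = 0;
  for (let i = 0; i < 1e6; i += 1) total += i;
  return total;
}

// Wrap the section of code you care about with marks, then measure between them.
performance.mark("cart-update:start");
updateCart();
performance.mark("cart-update:end");
performance.measure("cart-update", "cart-update:start", "cart-update:end");
```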
If your customer base is large enough, you might find benefit in Real User Monitoring (RUM) tools, which can let you know what the actual customer experience is like. These are outside the scope of this article, but I have used them at several companies to understand the range of customer experience and focus efforts on real-world performance and error handling.
Alternatives
It's easy to dive into "how do I improve this thing", but that isn't always the best answer. You may save a lot of time by stepping back and asking, "Is this the right solution for this problem?"
Issues loading a very large list of elements on the DOM? Maybe a virtualized list where only the visible elements are loaded on the page would resolve the performance issue.
Performing many complex operations on the client? Would it be faster to calculate some or all of this on the server? Can some of the work be cached? (See the caching sketch after these examples.)
Taking a bigger step back: Is this the right user interface for this task? If you designed a dropdown to expect twenty entries and you now have three thousand, maybe you need a different component or experience for making a selection.
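For the caching idea above, a minimal memoization sketch might look like this. The getShippingEstimate and expensiveShippingCalculation names, and the idea of keying results by zip code, are assumptions made up for the example:

```js
// Cache results of an expensive, repeatable computation keyed by its input.
const priceCache = new Map();

// Stand-in for costly work (a server round trip, heavy math, etc.).
function expensiveShippingCalculation(zipCode) {
  let n = 0;
  for (let i = 0; i < 1e6; i += 1) n += i % 7;
  return n + zipCode.length;
}

function getShippingEstimate(zipCode) {
  if (priceCache.has(zipCode)) {
    return priceCache.get(zipCode); // reuse the earlier result
  }
  const estimate = expensiveShippingCalculation(zipCode);
  priceCache.set(zipCode, estimate);
  return estimate;
}

getShippingEstimate("30301"); // computed the first time
getShippingEstimate("30301"); // returned from the cache
```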
Good Enough?
With any performance work, there is a secondary question of "what is enough"? There's an excellent video from Matt Parker of Stand-up Maths talking about some code he wrote and how his community improved it from weeks of runtime to milliseconds. While it's incredible that such an optimization was possible, there's also a point for nearly all projects at which you reach "good enough".
For a program that only needs to run once, weeks might be acceptable, hours would be better, but how much time you spend getting there quickly becomes an important consideration.
You could think of it like tolerances in engineering. We have a target, and we have a range of acceptance. We can strive for perfection while understanding that success and perfection are not the same thing.
Identifying Performance Targets
Targets are a crucial part of optimization. If all you know is that the current state is bad, "make it better" is an open-ended goal. Without a target for your optimization journey, you can waste time chasing more performance or more optimization when you could be working on something more important.
I don't have a good metric for this, since performance optimization can vary so much, but try not to get lost in the weeds. This is really more about project management and planning than about coding solutions, but developer input is important when defining optimization targets. And as the Alternatives section suggests, the solution may not be "make it faster."
Setting Limits
In Matt Parker's case, he eventually needed the answer and didn't need the machine for anything else. In our world, we are usually weighing visitor performance and its likely financial impact against developer/team time and its opportunity cost, so the trade-off is not that simple.
Imagine we know that cutting our add-to-cart time by 50% would increase our revenue by 10%, but the work would take two months to complete. Is there something that could have a bigger financial impact than two months of optimization work? Can you deliver some benefit in less time? Again, this is project management, not code.
Isolating Complexity
If you find that you do need to optimize code, it is also a good time to see whether you can separate that code from other parts of your project. If you know you have to write complex optimizations that make the code harder to follow, extracting them into a utility or library can make reuse easier and lets you update that optimization in one place as it needs to change over time.
Conclusion
Performance is a complicated topic with many twists and turns. If you are not careful, you can spend a great deal of energy for very little practical benefit. Curiosity can be a good teacher, but it does not always produce results. Playing around with code performance has its benefits, but it also pays to analyze the actual sources of slowness in your project and to use the available tools to address them.
Resources
- Addy Osmani – Visualizing JS processing over time with DevTools Flame Charts
- Stand-up Maths – Someone improved my code by 40,832,277,770%
- Cover image created with Microsoft Copilot