Mark Feffer: Welcome to PeopleTech, the podcast of the HCM Technology Report. I’m Mark Feffer.
Mark Feffer: This edition of PeopleTech is brought to you by ADP. Its Next Gen HCM is designed for how teams work, and helps you break down silos, improve engagement and performance, and create a culture of connectivity. Learn more at flowofwork.ADP.com.
Mark Feffer: Today, I’m speaking with John Marcantonio, head of platform evangelism and federated development at ADP Next Gen. We’re going to talk about low-code development, where it’s at, and what it could mean for HR and HR technology.
Mark Feffer: John, thanks for visiting.
Why is it that you, and someone in your position, would care about low-code development?
John Marcantonio: It’s a great question. I really view low code as the next frontier of software development. Actually, earlier in my career and my education, I was a computer scientist and definitely thrust into more traditional software development. Low code is a way to really not only increase efficiency of development, but broaden who can participate in that process and how great new ideas can get more rapidly developed and brought to market.
Mark Feffer: How do you define low-code development? What is it?
John Marcantonio: It’s a development methodology, where instead of, again, typing out code line by line, it’s more of a visual representation of the software development process. Instead of text, it’s visual blocks that represent app logic or data elements, things of that nature, where users can string together and create applications for either mobile, web, whatever the target is, within that type of drag-and-drop paradigm.
Mark Feffer: I’ve seen material that talks about it being applied in a lot of places around the organization. But how, in particular, do you think it can be used by HR?
John Marcantonio: I think HR is actually in a rather unique position as far as low-code development goes. Taking a step back, HR is oftentimes viewed as a cost center, obviously critical to the organization as far as managing people and talent. But compared to, let’s say, product development or sales and marketing, which are more revenue-driven, I think HR can sometimes struggle to pursue the ideas or initiatives it wants, whether that’s building new applications, integrating with systems, or delivering new ways of engaging with the user base, because it’s fighting it out with the rest of the organization for resources or dollars or whatever it is.
John Marcantonio: Looking at low code, I think if we can broaden the base of who can modify or build applications, either as part of an organization or even HR themselves, I think it gives them a lot more control to not only consider these new ways of pushing forward people, operations and the HR function, but putting that control in their hands, really letting them go to town and potentially build up the skills and capabilities to make those ideas realities and continue to iterate without a lot of the overhead that exists in many other functions today.
Mark Feffer: When you think about HR practitioners, these are the people who still often rely on spreadsheets to do things. Do you think that HR practitioners are going to embrace this kind of thing or are companies going to have to nudge them along to adopt it?
John Marcantonio: I think there’s going to be a little bit of both. I think there’s certainly going to be a spectrum of individuals. Let’s say you’re right. Let’s say individuals more comfortable with the spreadsheet and more traditional models may have a little more of a leap, or may not be as comfortable with jumping into this type of technology or tool set. But I think there’s a lot of room, particularly for those up and coming, or maybe individuals more of like a business analyst or a broader background that are coming to HR organizations to embrace this type of flexibility, the ability to create applications or modify those things that can bring value to the organization.
John Marcantonio: I also think it gives more options, not only to the practitioners themselves, but to the broader organization, in thinking about who they can bring on to help augment that type of development. It could be the team. It could be a different class of consultancy or development shop that doesn’t have the same overhead or cost structure as those doing heavier lifting, let’s say big CRM or marketing implementations, or the rather costly external contractors with deep expertise and knowledge. In a low-code environment, you could have a broader pool of individuals who don’t have to be as deep or as technical to deliver the same level of value within those systems. I think the options definitely grow.
John Marcantonio: The last point in regards to practitioners, I’d say: I’d like to see a shift, not only in what can be delivered, but I would hope also that it broadens the horizons of HR organizations to think not only in terms of just the regular people operations or HR functions, but if they had the option to extend applications or create applications that can augment or improve existing processes, that becomes more of their day-to-day thought process. If they can make it better, great, then let’s think of those ideas. Let’s curate those. Let’s test them. If they can do that quickly and somewhat easily, the ideas become a virtuous cycle, where it’s like agile development. Let’s put it out there, let’s test it, let’s see what our users say. Let’s improve it and hopefully continue that from there.
Mark Feffer: What do you think the state of play is today? Are HR departments that you know of actually starting to do this? Are they starting to bring some of their tool development in-house with this kind of platform?
John Marcantonio: I think they’re starting to. I think there’s certainly been a lot of interest in even our own Next Gen HCM platform, as far as the possibilities of being able to modify and bring this type of rapid development iteration to market. If you asked me that question six months ago, I’d have one answer. Given the state of the world over the past few quarters with COVID and the rapid changes in work-from-home and different employee policies that have come to light, I think a lot of organizations have realized that the ability to shift rapidly and to provide information that’s very relevant now and really reach out to their employee bases is an incredibly powerful thing.
John Marcantonio: If you look at more traditional development models, if one wanted to go off and build, let’s say, an employee outreach mobile app that talked about work from home and new policies and updates that are going on, I’m sure the dust will settle from COVID by the time the contracts could be wrapped up. If you gave organizations the power to move very quickly and, again, roll that out, make it tailored to them, I think that’s a very compelling opportunity to really move the needle for their organization, help them stay ahead of things. Again, I think that mindset or those questions are starting to pop up more and more. The past few months have really brought it to light in a way that I don’t think many organizations have seen in the past.
Mark Feffer: One of the things that strikes me is the role of the HR practitioner has really been changing the last several years. They’re having to work more with data. They’re having to work more with mobile technology. Actually, all kinds of technology has become integral to what HR does. How does this impact the overall role of HR, the HR practitioner? Is there a real redefinition of the role going on?
John Marcantonio: I think there is. You’re right, the skillsets that HR practitioners and organizations need to understand or be in tune with are definitely broadening. I think data was the first big step: looking at what’s available in the organization and how to leverage it. Understanding the talent base, where improvements are needed, and how to foster top performers is certainly more data-driven now that the data is available and they can start to rationalize it.
John Marcantonio: But then you take it one step further, beyond, let’s say, very traditional, basic HCM systems. There’s certainly a lot more that can be done and tailored to the organization, and that favors practitioners who are more in tune with those possibilities, or who, like you said, leverage things like mobile technologies to really integrate into the workflow.
John Marcantonio: I think it asks the HR community to really start thinking about those questions a little more deeply and seeing where they can improve and differentiate even within their own organizations. Not only for efficiency, but for employee satisfaction and efficacy, and for how they can react and continue to evolve within an ever-changing landscape.
Mark Feffer: One of the things that strikes me about HR is they’re very concerned about how they’re perceived by people outside of the HR function. If they begin to do more coding on top of understanding data more, do you think other parts of the organization will start to look on HR in a different way?
John Marcantonio: I think they will. I think it’s also a balance, like in any other organization. A lot of peer groups in most companies, not only HR but also sales, marketing, and other ops functions, face a balance between the core value-add and, like you said, development, for example.
John Marcantonio: Obviously I don’t think any org wants to over-index with an HR team that’s just building software all day, even if it’s done in a low-code environment. But, again, I think it gives them a little more insight and flexibility in how to operate. I think every organization will face this mix in the future of what do they tool up or do in-house, what capabilities or expertise do they want their organization to manage versus just making it easier to bring in resources to augment their staff or to even take on these projects on their own in a way that’s not, again, massive or a big budget lift or involve a lot of overhead that you see in other projects.
John Marcantonio: I think you’re right. That’s going to have a trickle-down effect. Then as practitioners in HR orgs get more savvy, not only in data, but in the underlying technology, what’s available, I think it puts them in a better position. You can understand how does the whole ecosystem come together, how do all the moving parts come together. As individuals are discussing, again, big projects or integrations or building out new initiatives, I think it gives them more, not only perspective, but I think also credibility.
Mark Feffer: It seems like over the last few years, HR has been working pretty hard to develop a good relationship with IT, especially as HR technology becomes more sophisticated and more grounded in the company. How will low-code development, do you think, impact the relationship between IT and HR?
John Marcantonio: I think it’s going to get deeper. I think there’s two sides of the story. On the one hand, I think there’ll be a continued partnership as far as just the nuts and bolts of what IT would need to do to support, let’s say, a low-code environment. There’s user management and security provisioning, and, just like any other system, to ensure that they get up and running and it’s maintained from a, let’s say, network infrastructure perspective. Things of that nature. I think that gives IT something else to get their skillset on, work with, gives HR a little more visibility to how they operate and what’s needed there.
John Marcantonio: The flip side I can see as well, and this is something all orgs go through as they evolve through this process, is that the more control or flexibility you give to any team, let’s say HR in this case, in a low-code environment, the more they’ll be building or modifying applications, particularly ones that may integrate with other systems within the company. That begs the question: where are the checks and balances, and how do we make sure everything is at a proper quality level and that whatever is rolled out works consistently? If something goes wrong, who’s on the other side of the phone to field that question?
John Marcantonio: As much as there will be excitement and a new skillset for IT to onboard, the flip side is another area of management, if you will, that will need to get sorted out in a very different way. Because, again, a lot of this is not only the development process; think about all the operational components. How do these apps get tested? How do they get deployed? Where do they get deployed to? Are there different integration or policy considerations that go along with it? These are things that I think every company is starting to revisit and understand. Does this follow the current playbook? Are there new rules they have to adhere to? Then they’ll make that call from there. But overall I think there’ll be a tighter relationship as more of this comes into focus.
Mark Feffer: Do you think that HR is positioned to be a first mover with this within the organization? When data started to rise, I think a lot of HR departments found themselves unexpectedly being some of the first users of different types of analytics. I’m wondering if you think the same kind of thing might happen here or do you think they might follow?
John Marcantonio: I think it will depend on the organization, but I think in many cases they do have the opportunity to lead, mostly because many of the orgs in the company, again, for product development or the other operational groups, they have an established process. They have either internal resources they’re leveraging, they have contractor bases, they have a rhythm of what they’re building and maintaining. The machine is up and running, if you will.
John Marcantonio: In HR’s case they’re always, let’s say, potentially fighting for resources or it’s much more difficult to get projects staffed and built out. If this gives them an opportunity to do that, either with their resources or, again, a much more streamlined, external set of resources, again, they take a little more control for themselves. I think that control will start to lead to some experimentation. That experimentation will lead to results that can be evaluated and iterated on.
John Marcantonio: I don’t expect every org is just going to jump in with both feet and say, “Okay, every HR member is now a developer. Go modify everything under the sun.” But I think as these tools become available and the opportunity comes up, I think there’ll be room for experimentation and hopefully, too, I think, these teams will wade into the waters a little bit, saying, “Okay, what if we built this one little widget?”
John Marcantonio: The COVID scenario is a great example. It’s like, “What if we just built a little app that allowed us to talk about work-from-home policies and link out to what we need? Great. Build it. Push it out. See what happens.” It doesn’t have to be a monumental undertaking, but with relatively low cost and risk, really standalone, isolated applications could be built, giving the team some experience and confidence to use as a test for the next.
John Marcantonio: If we live up to the promise of low code, the “citizen developer” with a relatively low barrier to entry to development, I think it gives those HR teams the opportunity to jump in, potentially ahead of the curve. If all goes well, I can very much see them being the beacon for other teams in the organization, who will look at their experience, whether they were successful, and whether this is something they may want to pursue or start migrating toward from other, let’s say, established development practices or systems.
Mark Feffer: John, thanks very much.
Mark Feffer: That was John Marcantonio, head of platform evangelism and federated development at ADP Next Gen.
Mark Feffer: And this has been PeopleTech, from the HCM Technology Report. This edition was sponsored by ADP. Next Gen HCM, designed for how teams work. Learn more at flowofwork.ADP.com.
Mark Feffer: And to keep up with HR technology, visit the HCM Technology Report every day. We’re the most trusted source of news in the HR tech industry. Find us at www.hcmtechnologyreport.com. I’m Mark Feffer.
ADP has been around for more than 70 years, fulfilling payroll and other human resources services. Payroll processing is a complex business, involving the movement of money in accordance with regulatory and legal strictures.
From an engineering point of view, ADP has decades of software behind it and a bright future as a platform company used by thousands of companies. Balancing the maintenance of old code while charting a course with new projects is not a simple task.
Tim Halbur is the Chief Architect of ADP, and he joins the show to talk through how engineering works at ADP, and how the organization builds for the future of the company while maintaining the code of the past.
“The best employees love what they do,” former PepsiCo CEO Indra Nooyi once responded to a crowd of bright-eyed summer IT interns. It’s quite a simple answer to the question, “What do the best employees do, and how can I be like them?” The more you think about Indra’s quick response, the more you realize it extends to more than simply what we do. As one of those interns, I didn’t quite understand the depth of this statement until I had exposure to different roles later in my career.
Loving what you do is inherently important, yes, but it’s not the single most important aspect of being an incredible employee. Whether people want to acknowledge it or not, what you do for work becomes part of you. For example, I may have been called a ‘tech bro’ once or twice after being asked what industry I’m in, to which I usually respond: “I’m not fixing your computer.” After the laughter calms down, I usually begin talking about the work I do, and then, inevitably, the “are you happy?” question arises. Most people respond fairly quickly, and to varying degrees, but if you stop and think about the rounded picture of happiness at your job, it usually comes down to one thing: personal fulfillment based on the investment of your time. Many concepts can be abstracted from this: work relationships and enjoyment, culture-driven benefits, monetary compensation, societal impact, and so on. How many start-ups have you heard say that they’re going to change the world? Have you ever stopped to think of that as a marketing ploy rather than a vision statement?
We all weigh the ingredients of personal happiness differently, but I’ve found that one counts for more than the others, and that’s culture. From an organizational perspective, culture is tough to get right and easy to ruin, but when fostered correctly it can truly breed the best and happiest employees. Culture is created organically, targeting the basic human desire to belong, fit in, and feel like a contributor to the group. A group that allows for dynamism of thought, and the freedom to express it without judgment, enables each individual to feel heard while allowing the best outcome to emerge. This, in turn, opens up avenues for new and personal discussions between individuals, potentially turning into friendships. When employees go into work every morning considering their coworkers people they look forward to working with, that’s where the magic happens, and culture is born. Good relationships grow general satisfaction with a person’s environment, and happy employees are much more likely to go out of their way to make the organization succeed, which, coming full circle, makes them good employees in the eyes of the organization.
How often in pop culture is work-life balance portrayed by someone comically saying, “I don’t know any of you outside of work”? Full disclosure: I’ve never once heard anyone explicitly say this, but I have seen people act this way. That mentality simply doesn’t instill a sense of trust, no matter what organization or industry you work in. Don’t get me wrong, separation is good and healthy to have, but communication is key.
At Lifion, by ADP, we’re in the business of Human Capital Management, and we understand that if an organization doesn’t manage its resources correctly, even the strongest culture won’t prevail. Everyone wants to be treated as an individual rather than just another employee. No inhuman litmus test for happiness should ever substitute for a one-on-one, a drink together at happy hour, or a sincere “hey, how are you doing?”
In summary, you never know how much impact you have on someone’s daily work experience. People are more than just resources to get a job done, and they respond well when treated as individuals. At the end of the day, you can have the best strategy in the world, but with no one to build it, it’s not worth anything. Simply put, whether you are a company, a manager, or a coworker, the best advice I can give is: be human.
Let’s talk about you and me, and how we used to find unique items before ES6. We really only had two ways to do it (if you had another one, let me know). In the first, we would create a new empty object, iterate through the items we wanted to deduplicate, and create a property using each item as the key and something like `true` as the value; the list of that object’s keys was our result. In the second, we would create a new empty array, iterate through the items, and for each item check whether it already existed in the array: if it did, continue; if not, add it. By the end, the array would contain all the unique items.
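Those two pre-ES6 approaches can be sketched as follows (the function names are mine, for illustration):

```javascript
// Approach 1: use an object's keys as a poor man's set.
// Reliable only for string-like items, because object keys
// are always coerced to strings.
function uniqueViaObject(items) {
  const seen = {};
  for (let i = 0; i < items.length; i++) {
    seen[items[i]] = true;
  }
  return Object.keys(seen);
}

// Approach 2: build up an array, skipping items already present.
// O(n^2) overall, since indexOf scans the array for every item.
function uniqueViaArray(items) {
  const result = [];
  for (let i = 0; i < items.length; i++) {
    if (result.indexOf(items[i]) === -1) {
      result.push(items[i]);
    }
  }
  return result;
}
```

Note that the object-based version silently turns every item into a string, one of the subtle pitfalls Sets remove.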
ES6 introduced Sets, a new data structure with a very simple API for handling unique items that is not just convenient but also very fast. The intention of this article is to introduce you to some new methods coming to Sets soon that will make them even more useful, but first, let’s review the basics.
To create a new set we only need to use the constructor. We can optionally pass any iterator, such as an array or a string, and the iterated items will become elements of the new set (repeated items will be ignored).
const emptySet = new Set();
const prefilledSet = new Set(['a', 'b', 'a']); // illustrative values; the duplicate 'a' is dropped, so size is 2
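Beyond the constructor, the day-to-day API is tiny; a quick refresher (the values here are illustrative):

```javascript
const tags = new Set(['node', 'js']);

tags.add('js');                // already present: a silent no-op
tags.add('wasm');
console.log(tags.size);        // 3
console.log(tags.has('node')); // true

tags.delete('node');
console.log(tags.has('node')); // false

// Sets are iterable, so deduplicating an array is a one-liner:
const unique = [...new Set([1, 2, 2, 3])]; // [1, 2, 3]
```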
Since Lifion’s inception as ADP’s next-generation Human Capital Management (HCM) platform, we’ve made an effort to embrace relevant technology trends and advancements. From microservices and container orchestration frameworks to distributed databases, and everything in between, we’re continually exploring ways we can evolve our architecture. Our readiness to evaluate non-traditional, cutting edge technology has meant that some bets have stuck whereas others have pivoted.
One of our biggest pivots has been a shift from self-managed databases & streaming systems, running on cloud compute services (like Amazon EC2) and deployed with tools like Terraform and Ansible, towards fully cloud-managed services.
When we launched the effort to make this shift in early 2018, we began by executing a structured, planned initiative across an organization of 200+ engineers. After overcoming the initial inertia, the effort continued to gain momentum, eventually taking on a life of its own, and finally becoming fully embedded in how our teams work.
Along the way, we’ve been thinking about what we can give back. For example, we’ve previously written about a node.js client for AWS Kinesis that we’re working on as an open source initiative.
AWS’s re:Invent conference is perhaps the largest cloud-community conference in the world. In late 2018, we presented our cloud transformation journey at re:Invent. As you can see in the recording, we described our journey and key learnings in adopting specific AWS managed services.
In this post, we discuss key factors that made the initiative successful, its benefits in our microservice architecture, and how managed services helped us shift our teams’ focus to our core product while improving overall reliability.
The notion of services sharing databases, making direct connections to the same database system and depending on shared schemas, is a recognized microservice anti-pattern. With shared databases, changes to the underlying database (including schemas, scaling operations such as sharding, or even migrating to a better database) become very difficult, requiring coordination across multiple service teams and releases.
As Amazon.com CTO Werner Vogels writes in his blog:
Each service encapsulates its own data and presents a hardened API for others to use. Most importantly, direct database access to the data from outside its respective service is not allowed. This architectural pattern was a response to the scaling challenges that had challenged Amazon.com through its first 5 years…
And Martin Fowler on integration databases:
On the whole integration databases lead to serious problems becaue [sic] the database becomes a point of coupling between the applications that access it. This is usually a deep coupling that significantly increases the risk involved in changing those applications and making it harder to evolve them. As a result most software architects that I respect take the view that integration databases should be avoided.
Applying the database-per-service principle means that, in practice, service teams have significant autonomy in selecting the right database technologies for their purposes. Among other factors, their data modeling, query flexibility, consistency, latency, and throughput requirements will dictate the technologies that work best for them.
Up to this point, all is well: every service has isolated its data. However, when architecting a product with a double-digit number of domains, several important database infrastructure decisions need to be made:
When we first started building out our services, we had a sprawl of supporting databases, streaming, and queuing systems. Each of these technologies was deployed on AWS EC2, and we were responsible for the full scope of managing this infrastructure: from the OS level, to topology design, configuration, upgrades and backups.
It didn’t take us long to realize how much time we were spending on managing all of this infrastructure. When we made the bet on managed services, several of the decisions we’d been struggling with started falling into place:
On our Lifion engineering blog, we’ve previously written about our Lifion Developer Platform Credos. One of these speaks to the evolutionary nature of our work:
When we started adopting managed services, we went for drop-in replacements first (for example, Aurora MySQL is wire compatible with the previous MySQL cluster we were using). This approach helped us to get some early momentum while uncovering dimensions like authentication, monitoring, and discoverability that would help us later.
Our evolutionary architecture credo helped to ensure that the transition would be smooth for our services and our customers. Each deployment was done as a fully online operation, without customer impact. We recognize that we will undergo more evolutions, for which we intend to follow the same principles.
There is nothing quite like the existential dread of realizing you may have a performance issue. Months, possibly years, of work on your codebase are suddenly failing you in a live environment. Reproduction is difficult, and even if you can reproduce the issue on your dev machine, your usual methods of debugging are likely useless. Sure, you can litter your code with entirely too much analytics, or lean on Node’s own profiling, but that produces a data hose that takes a deep understanding of Node, V8, and data science itself to make sense of. This is a rather dire situation.
A common scenario in any service is that you have some asynchronous operation you need to perform on a data set of unknown length. Your first instinct may look something like this:
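A minimal sketch of that pattern (the operation and names here are stand-ins for whatever your service actually calls):

```javascript
// Stand-in for a real network call; resolves asynchronously.
async function fetchDetails(id) {
  return { id, ok: true };
}

// The first instinct: await each item in turn.
// Every iteration blocks on the previous one, so total time grows
// linearly with the size of the data set.
async function processSequentially(ids) {
  const results = [];
  for (const id of ids) {
    // Many lint configs flag this via the no-await-in-loop rule.
    results.push(await fetchDetails(id));
  }
  return results;
}
```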
You quickly discover that this is not very fast. Simple interrogation of CPU utilization will show Node is spending a lot of time waiting. Depending on your lint rules, you may not even be able to get this code past a CI build. The solution for many is to simply fire all the promises then await all of them. That, in and of itself, is not a bad thing; for example, say you know that every time you execute a given route, you need to make 3 unrelated calls to other services. Since you know it will always create 3 promises, this is bounded and safe. The problems arise when the promise creation is potentially unbounded.
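The fire-everything-then-wait version can be sketched like this (again with a stand-in async call):

```javascript
// Stand-in for a real network call.
async function fetchDetails(id) {
  return { id, ok: true };
}

// Create every promise up front, then await them all.
// Fine when the count is known and small; dangerous when ids can
// contain thousands of items, since every request (and every
// response object) is in flight or queued in memory at once.
async function processAllAtOnce(ids) {
  return Promise.all(ids.map((id) => fetchDetails(id)));
}
```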
You test your code and discover it is much faster, Node is not spending time waiting around, and the lint error is gone. You merge and deploy your code, all the integration tests pass, and you go home to watch The Expanse. Suddenly, right when the Rocinante (a legitimate salvage) is entering a tense battle, you start getting PagerDuty alerts. Your services containers are getting restarted constantly due to Out of Memory exceptions, the container host is experiencing socket exhaustion, and your response times are through the roof. You have a major performance and scalability issue.
This is a nice, clean scenario, so the suspect code is easily identified; but what if it isn’t? Why does this code fail to scale? This is where Clinic.js shines. For the sake of exercising Clinic.js, I created six scenarios that all perform the same asynchronous work some n times using different strategies: for…of, the ES2018 for await…of, the Promise.all() approach described above, Bluebird.map() with a concurrency limit set, the promise-limit module, and the p-limit module.
For the test, I start a clustered web server that accepts GET and POST requests. Both verbs perform some arbitrary work before completing the request. The test itself gets the data, does arbitrary work, and posts the results back. In both cases the work is primarily parsing JSON and manipulating the resulting object, in an attempt to simulate a common Node workload. Each test run does this work 500 times, once for each element in a mock collection. The limiting modules are all set to allow 10 concurrent promises. All tests were performed using Node v12.14.0. The average execution times over 10 test runs can be seen below.
╔═══════════════╦══════════════════════════════════╗
║ Test          ║ Average Execution Time (Seconds) ║
╠═══════════════╬══════════════════════════════════╣
║ await-for-of  ║ 6.943                            ║
║ bluebird-map  ║ 4.550                            ║
║ for-of        ║ 6.745                            ║
║ p-limit       ║ 4.523                            ║
║ promise-all   ║ 4.524                            ║
║ promise-limit ║ 4.457                            ║
╚═══════════════╩══════════════════════════════════╝
The results here certainly reflect the performance improvement when going from for…of to issuing concurrent promises, but the limiting libraries have similar execution times. Let’s dive deeper into what is going on.
In the for await…of and for…of scenarios, CPU utilization bounces around and averages well below 100%. This means the Node process is spending a lot of time waiting. The execution time is also non-ideal, at roughly 7 seconds averaged over 10 runs. Clinic detects this and reports a potential I/O wait issue bottlenecking the process. It recommends running a ‘bubbleprof’ to better identify the issue.
Note the light blue segment in the graph; it represents network I/O waits. The process is spending quite a bit of time waiting for network operations. Perhaps we can speed this up by issuing them concurrently.
In the Promise.all() scenario, execution time has improved dramatically to 4.5 seconds on average, and the CPU is now being efficiently utilized. However, notice that memory usage spikes much higher, along with incredibly high event loop delay. Clinic detects this and reports a potential event loop issue. Why is this?
As you can see, the promise executor callback is invoked first, and the surrounding code then runs synchronously to the end of the block. Only after that do the `then` callbacks fire. This, however, is only part of the story.
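A minimal sketch of that ordering (the names here are illustrative):

```javascript
const order = [];

const p = new Promise((resolve) => {
  order.push('executor'); // the executor callback runs synchronously
  resolve();
});

p.then(() => order.push('then')); // queued as a microtask, not run yet

order.push('end of block'); // the rest of the block runs before any .then()

// Once the current synchronous run completes, `order` is:
// ['executor', 'end of block', 'then']
```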
In the case of the Promise.all() code above, a request is created for every item in the collection before any `then` block resolves. This is true even if the early requests complete before the end of the collection is reached. The responses will not be processed until the event loop ticks over, because I/O callbacks are not invoked until the event loop enters its I/O phase. Only at that point is `.then()`, or the code following `await`, executed.
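A small sketch makes this concrete: with Promise.all(), every task starts before any of them completes. Here `setTimeout` stands in for a network request, and the counter names are illustrative:

```javascript
// Returns a promise resolving to the peak number of tasks that were
// simultaneously in flight while running `n` tasks via Promise.all().
function measureMaxInFlight(n) {
  let inFlight = 0;
  let maxInFlight = 0;
  const task = () =>
    new Promise((resolve) => {
      inFlight++; // runs synchronously as each promise is created
      maxInFlight = Math.max(maxInFlight, inFlight);
      setTimeout(() => {
        inFlight--; // "completion" only fires on a later event loop tick
        resolve();
      }, 5);
    });
  return Promise.all(Array.from({ length: n }, task)).then(() => maxInFlight);
}

// measureMaxInFlight(100) resolves to 100: all tasks were created, and
// therefore in flight, before a single completion callback ran.
```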
Ultimately, creating an unbounded number of promises results in all of the response objects being queued in memory, waiting for the event loop to tick over. This can swamp the Node process with excessive memory allocations or, worse, cause out-of-memory events. When these objects go out of scope, the process freezes while the garbage collector runs, since collection is primarily a blocking, O(N) operation. It also means that when the event loop finally does tick over, the `.then()` callbacks of all completed requests fire synchronously. If they contain heavy work, the event loop stays stuck processing callbacks until that work completes or another truly asynchronous operation is encountered. Any events, inbound requests, timers, etc. will not be handled until the `.then()` callbacks have all completed. The biggest concern here is that response times can spike to several seconds. To solve this, we need to limit the number of concurrent promises created.
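The idea behind all three limiting libraries can be shown with a hand-rolled sketch: keep at most `limit` promises in flight at once. This is an illustrative implementation, not the actual code of Bluebird.map(), promise-limit, or p-limit:

```javascript
// Process `items` with `fn`, allowing at most `limit` concurrent promises.
// Results come back in the same order as the input.
async function mapWithLimit(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0; // index of the next unclaimed item

  // Start `limit` workers; each repeatedly claims the next index and
  // processes it. Claiming is safe because JS runs one frame at a time:
  // there is no await between reading and incrementing `next`.
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    async () => {
      while (next < items.length) {
        const i = next++;
        results[i] = await fn(items[i]);
      }
    }
  );

  await Promise.all(workers);
  return results;
}
```

With a limit of 10, only 10 response objects ever wait in memory at once, which is what flattens the memory and event loop delay graphs in the limited scenarios.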
As indicated in the graphs, Bluebird.map() with a concurrency limit pretty much eliminates the event loop blocking and the high memory usage while maintaining efficient CPU utilization. It matches the performance of Promise.all(), completing the test in 4.5s. Bluebird, however, has lost its performance edge over native promises in recent versions of V8, and it is a rather heavyweight, complete Promise replacement. Let’s consider some native-promise options.
The promise-limit module also resolves our issue and again matches the performance of Promise.all() at 4.5s; the unlimited approach gains nothing here because of the aforementioned pressure on Node’s memory system from unnecessary allocations and expensive garbage collection events. The overhead of this module does get flagged by Clinic.js as creating some event loop blocking. It is also nowhere near as popular as the final module, p-limit.
The p-limit module has the least event loop blocking of all the options while providing the same performance as Promise.all() at 4.5s. Given its performance and its popularity relative to promise-limit, it is the clear winner.
Clinic.js enables introspection into how your Node application performs beyond simple top-line execution times, examining not only your resource utilization but what the code itself is doing. I have by no means explored all of the tool’s capabilities here, but it grants us insight into why a specific code path may appear to perform fine on its own yet wreak havoc inside a live service. NearForm presented it to us during Node Day 2019, and it has quickly become an indispensable debugging tool for us. Its ease of use and clear presentation help not only to quickly identify problematic code, but also to convey the “why” to others who are not deep in the debugging process with you. It was the obvious choice to illustrate the issues with unbounded promise creation.