
AI and Data Ethics: 5 Principles to Consider

As organizations develop their own internal ethical practices and countries continue to develop legal requirements, we are at the beginning of determining standards for ethical use of data and artificial intelligence (AI).

In the past 20 years, our ability to collect, store, and process data has dramatically increased. There are exciting new tools that can help us automate processes, learn things we couldn’t see before, recognize patterns, and predict what is likely to happen. Since our capacity to do new things has developed quickly, the focus in tech has been primarily on what we can do. Today, organizations are starting to ask what’s the right thing to do.

This is partly a global legal question as countries implement new requirements for the use and protection of data, especially information directly or indirectly connected to individuals. It’s also an ethical question as we address concerns about bias and discrimination and explore questions about privacy and a person’s right to understand how data about them is being used.

What is AI and Data Ethics?

Ethical use of data and algorithms means working to do the right thing in the design, functionality, and use of data in Artificial Intelligence (AI).

It’s evaluating how data is used and what it’s used for, considering who does and should have access, and anticipating how data could be misused. It means thinking through what data should and should not be connected with other data and how to securely store, move, and use it. Ethical use considerations include privacy, bias, access, personally identifiable information, encryption, legal requirements and restrictions, and what might go wrong.

Data Ethics also means asking hard questions about the possible risks and consequences to the people whom the data is about and to the organizations that use that data. These considerations include how to be more transparent about what data organizations have and what they do with it. It also means being able to explain how the technology works, so people can make informed choices about how data about them is used and shared.

Why is Ethics Important in HR Technology?

Technology is evolving fast. We can create algorithms that connect and compare information, see patterns and correlations, and offer predictions. Tools based on data and AI are changing organizations, the way we work, and what we work on. But we also need to be careful about arriving at incorrect conclusions from data, amplifying bias, or relying on AI opinions or predictions without thoroughly understanding what they are based on.

We want to think through what data goes into workplace decisions, how AI and technology affect those decisions, and then come up with fair principles for how we use data and AI.

What Are Data Ethics Principles?

Ethics is about acknowledging competing interests and considering what is fair. Ethics asks questions like: What matters? What is required? What is just? What could possibly go wrong? Should we do this?

In trying to answer these questions, there are some common principles for using data and AI ethically.

  1. Transparency – This includes disclosing what data is being collected, what decisions are made with the assistance of AI, and whether a user is dealing with bots or humans. It also means being able to explain how algorithms work and what their outputs are based on. That way, we can evaluate the information they give us against the problems we’re trying to solve. Transparency also includes how we let people know what data an organization has about them and how it is used. Sometimes, this includes giving people an opportunity to have information corrected or deleted.
  2. Fairness – AI doesn’t just offer information. Sometimes it offers opinions. This means we have to think through how these tools and the information they give us are used. Since data comes from and concerns humans, it’s essential to look for biases in what data is collected, what rules are applied, and what questions are asked of the data. For example, if you want to increase diversity in hiring, you don’t want to only rely on tools that tell you who has been successful in your organization in the past. This information alone would likely give you more of the same rather than more diversity. While there is no way to completely eliminate bias in tools created by and about people, we need to understand how the tools are biased so we can reduce and manage the bias and correct for it in our decision making.
  3. Accuracy – The data used in AI should be up to date and accurate, and there need to be ways to correct it. Data should also be handled, cleaned, sorted, connected, and shared with care to retain its accuracy. Taking data out of context can sometimes make it misleading or untrue. So accuracy depends partly on whether the data is true, and partly on whether it makes sense and is useful for what we are trying to do or learn.
  4. Privacy – Some cultures believe that privacy is part of fundamental human rights and dignity. An increasing number of privacy laws around the globe recognize privacy rights in our names and likenesses, financial and medical records, personal relationships, homes, and property. We are still working out how to balance privacy with the need to use so much personal data. Lawmakers have been more comfortable allowing broader uses of anonymized data than of data where you know, or can easily discover, who it’s about. But as more data is collected and connected, questions arise about how to maintain that anonymity. Other privacy issues include the security of the information and what people should know about who has data about them and how it’s used.
  5. Accountability – This is not just compliance with global laws and regulations. Accountability is also about the accuracy and integrity of data sources, understanding and evaluating risks and potential consequences of developing and using data and AI, and implementing processes to make sure that new tools and technologies are created ethically.

As organizations develop their own internal ethical practices and countries continue to develop legal requirements, we are at the beginning of determining standards for ethical use of data and AI.

ADP is already putting AI and data ethics into practice by establishing an AI and Data Ethics Board and developing ethical principles customized to ADP’s data, products, and services. Next in our series on AI and Ethics, we will be talking to each of ADP’s AI and Data Ethics Board members about ADP’s guiding ethical principles and how ADP applies those principles to its design, processes, and products.

Read our position paper, “ADP: Ethics in Artificial Intelligence,” found in the first blade underneath the intro on the Privacy at ADP page.


Podcast: ADP’s Brianne Wilson Explains Compensation Philosophy, Why It Matters

Mark Feffer: Welcome to PeopleTech, the podcast of the HCM Technology Report. I’m Mark Feffer.

This edition of PeopleTech is brought to you by ADP. Its Next Gen HCM is designed for how teams work, and helps you break down silos, improve engagement and performance, and create a culture of connectivity. Learn more at flowofwork.ADP.com.

Today, I’m speaking with Brianne Wilson, manager of product management for core HR, compliance and compensation at ADP. We’re going to talk about, obviously, compensation—and compensation philosophy, things you should consider when designing your compensation plan, and why it all matters. It’s not as obvious as you might think.


Brianne, thanks for being here.

First, can you tell me what’s a compensation philosophy, and as employee expectations change, does the compensation philosophy change with it?

Brianne: That’s a really great question. Starting with the compensation philosophy, if we went by my handy textbook, the way to think about it is that there are a lot of metrics out there about what people are being paid in a certain job, in a certain location, at a certain type of company. But when it really comes down to it, as leaders in your organization … Say we’re just starting a business together, and we’re really thinking about how we want to pay people. Your compensation philosophy is your mission statement for how you reward your associates.

While you may have a certain job that makes a certain range, you can say, “We want to be competitive.” While project managers in New York City may make XY in a salary range, we know that there’s some really great talent here in New York City, and so in order to be more desirable—and we know the hard work that project managers put in—we’re going to increase our range in this particular area, and invest in this area to draw in more of the top talent.

Whereas there are other areas where maybe we don’t need to invest quite as much. And that’s really what your compensation philosophy is. It’s not so much making sure we’re paying people what they expect in the market. It’s really setting that vision statement for yourself.

I talk a lot about that with my teams, in the products we’re building, of compensation being… We often think of it as a science, but there is an art to it. So it’s an art and a science, but at its core it’s deeply personal, because what you’re paying someone is what motivates them to show up each day. It’s the way a company reflects its investment in you and its respect for you. It’s how they recognize the work that you are contributing, and at the end of the day that’s how you put food on the table and a roof over your head. Making sure leaders keep that in mind helps contribute to a really strong compensation philosophy.

In terms of how that’s changing today, even just what’s happening right now in the world, it all ties to compensation on top of that. The younger generation, there’s a trend now in sharing salary ranges on job postings, which we used to not do. It was very not okay to ever bring up the compensation question in your interviews until you’ve already invested tons of time interviewing. That’s a huge shift, and if we think about the momentum that’s happening… We actually saw this morning on Twitter, somebody saying, “Hey, these companies that are saying they’re progressive, why aren’t you posting your salaries?”

That’s what these upcoming generations are expecting, real transparency in pay, because we don’t live to work. We work to live, and the best way to reduce biases, the best way to ensure everybody has a fair and equal shot is really making sure you know what those salaries and bonus plans and stock options are like.

If you have that strong compensation philosophy, your ability to be transparent with the public about what you’re paying people ideally follows from it; the two tend to happen together.

Mark: The compensation philosophy and transparency, do they go hand in hand? Or is transparency a part of compensation philosophy?

Brianne: I’d say it’s the latter. The ability to be transparent would be a part of your philosophy. We intend to invest in these areas. We are going to be transparent with the public across all of our jobs. We are going to list them accurately to everyone, so that anyone who’s applying, everybody who works here knows what each other makes. That could be your compensation philosophy.

Mark: As you mentioned, the desires or the demands of employees change over time. How has comp changed over time to meet those demands, especially as the workforce has gotten younger?

Brianne: They are being forced to become more transparent. I’ve seen it happen. [Imagine] if somebody shares with their colleague what they make, and two people who have the same role uncover that there’s a huge disparity, and that disparity might be between a man and a woman, or a white person and a person of color. This younger generation is just so empowered in speaking up for themselves. When that happens, they’re going to go to leadership and say, “I contribute the same amount of work. I have the same job. I found out this person makes X percent more than me.” So that compensation philosophy of incorporating transparency is a direct result of those changing expectations.

I think it’s also the way we are operating as a country: the high cost of living, the extraordinary amount of student debt that these younger generations especially are shouldering as they leave university, and the expectation to understand, am I going to be able to live off of what you might be offering me? I’m going to work really hard, especially in the tech industry, and if I’m going to be putting in a lot of hours, what’s your investment in me? Because it’s extraordinarily expensive to keep a roof over your head.

Mark: How is it that companies get their compensation wrong, and why do you think they get it wrong?

Brianne: For me, it all stems back to that idea of a compensation philosophy. Compensation, there are people who are experts in this field. There are actual compensation practitioners. There are certification courses in how to not only create a philosophy but actually create structures around it. It’s not always an area that companies are able to invest in, or are aware even exists. I’ve worked at many startups, so it wasn’t really until I came here that I was even aware this role existed.

I think one area where I’ve seen we sometimes get it wrong is relying solely on the science piece. Organizations understand, “Let’s pull survey data. Let’s go on websites that promote what these salaries are in a certain area, and we’ll just go by those.” If you aren’t being strategic and you’re not thinking about where you want to make that investment to really pull in top talent, then you might lose out on the people you really want to invest in your company and who’ll provide the work that you’re looking for.

Oftentimes it’s like a moving target. Sometimes with your compensation structures it’s, “Okay, we’ve done our surveys, and we’ve created our job grades, and we figured out some way to adjust for cost of living.” But that’s often not focused on enough different criteria or job grades to account for all of the different ways you could be paying someone.

Where you live is just really one thing that would have an impact on what you should be making, and [how you’re] managing it. Making sure you’re reviewing it on a frequent basis. Some companies only review their compensation structures every three years. It depends on your industry, of course. The public sector is very different from the private sector. You have more leeway in the private sector than you do in the public sector, but I’ve seen them be very much just output-oriented. “Okay, here are our ranges and we’re paying everybody inside the right ranges, and everybody’s comp ratio is 1, and we make sure our high performers are above a 1.” But really it’s about taking that human aspect into consideration when you’re making compensation decisions, and thinking beyond outputs, thinking of outcomes and thinking of insights and impact. It’s not just about your budget.

Some places will start with a budget, and say, “Well, here’s how much money we have. What can we give people?” So they’re not even taking surveys into consideration. I often advise people, “You should have your compensation structure and your compensation philosophy completely outside of your budget, and then figure out how your budget can make that work.”

Mark: You talked before about the science behind compensation, and mentioned that a lot of employers depend on surveys. How does that work out, do you think? Where do surveys fall short?

Brianne: I’m not envious of anybody who has to make these decisions. I have the fun job of just figuring out how to help them.

It’s a lot of numbers. Are you pulling from enough surveys? Are you pulling from the right surveys? Is the population size large enough? And that’s still just the science of getting, “Okay, all product managers in New York City have on average, this looks to be about their range.”

Eventually, enough survey data can get you to that, but again, surveys won’t highlight where we’re making missteps as a society, or in different locations. Is that average salary range for a product manager in New York missing what the actual cost-of-living adjustment needs to be? What’s happening in each location?

Even if you use the surveys to create your structures, when you go in as a manager … This happened to me, the first time I had to do my compensation reviews for my direct reports, I got really, really stressed out, and I was the last person who should feel terrified of this, based on my job. I got really, really nervous. I was like, “Oh, my gosh. There’s all this information that’s coming at me. Oh God, I’ve got a minimum and a max, and here’s my budget and what does it mean? What if I’m a horrible person? What if I just really feel like being mean today? I don’t think this is accurate, but what if I don’t get along with the people who report to me? How do I know that I’m making the right decision?”

I didn’t feel that the numbers were enough, because everybody who reports to me, in my opinion, I’m very lucky to say, they’re all high performers. That doesn’t mean they’re high performers all in the same way. That’s something that surveys cannot assist you in. Even performance reviews, which are your way of evaluating people, are still bringing the qualitative into the science. Even with two of my direct reports, if they both get four out of five on their performance review, that doesn’t mean those fours mean the same thing. What if there’s a person who’s always been a four? What about somebody who was a two and now they’re a four? What if I knew something was going on with one of my direct reports? They were having a personal tragedy that I knew impacted the work they were doing. There are a lot of personal touches when you’re making those decisions that simple survey data and compensation structures just can’t capture; they can’t spit out a number and tell you what to do as a manager.

Mark: ADP has compensation data, and I wondered if you could tell me what’s the role ADP’s compensation data can play, and also why is it unique? Why is it valuable?

Brianne: The main thing is the sheer amount of data that we have. ADP processes payrolls for one in six Americans. So we have a ton of data of what we are paying people, and there’s a lot of different ways we can slice and dice that data, to provide insights.

That’s been a big focus for ADP—how do we translate all that data, all that science? Yes, we can contribute to the science. The science is important. We just have this sheer wealth of data that is unlike any other organization when it comes to what people are paying people.

The technology that we have to provide insights, I think, is where we’re really making a huge difference because you can uncover things around diversity and inclusion, and whether or not there’s any unconscious bias happening at your organization, to help you better contribute to that compensation philosophy.

I heard an example of a place where they gave a differential based on gender. They wanted to close the wage gap. They understood that there was a bias happening, so what if we took things like that, the things we just might not be aware of, and added them to our compensation structures? Just that sheer wealth of data that ADP has helps figure out where we are making missteps. I think that’s where we really become powerful in the compensation world as we keep growing.

Mark: My last question is, what do you think the future looks like in terms of compensation? And how do you see ADP building toward it?

Brianne: I think what I see for the future of compensation, it really comes down to shifting that focus from being a science to understanding it’s an art, and being incredibly personal.

Again, the need to shift to transparency, the upcoming generations of our workforce demanding that transparency, and advocating for themselves, the cost of living, the student loans that we’re shouldering, all those things we’ve already talked about here today.

I think that’s how compensation is shifting: toward being more insight- and impact-driven. Taking those insights and figuring out how we can make change is where I see compensation heading.

That’s my goal for the compensation products, and where ADP is heading is how do we keep collecting this data and start advising leadership and our managers: maybe you need to make this consideration in your compensation structure. Again, I think it’s such a great example of finding areas where you can put premiums on, give small percentages here and there, to make up for the fact that there might be bias in your organization. Publishing your agenda, your practice of how you create your compensation strategy. This is where I’m seeing things heading more and more. It’s not just going to be compensation practitioners who are aware of how the decisions are being made. We’re starting to show breakdowns of, “You got a 12% increase at your annual review, but here are all of the decision points that went into it: your merit increase because of your performance rating, your cost-of-living adjustment, your promotion increase, or any other number of reasons.”

Really communicating, at all levels of an organization, why every single compensation decision is made is where it’s heading. I feel like it’s always been a black box. I think that black box is about to be blown wide open in the coming years as compensation keeps scaling.

Mark: Brianne, thank you very much.

Brianne: Thank you. I love talking about compensation, so anytime.

Mark: That was Brianne Wilson, manager of product management for core HR, compliance and compensation at ADP.

And this has been PeopleTech, from the HCM Technology Report. This edition was sponsored by ADP. Next Gen HCM, designed for how teams work. Learn more at flowofwork.ADP.com.

And to keep up with HR technology, visit the HCM Technology Report every day. We’re the most trusted source of news in the HR tech industry. Find us at www-dot-hcm-technology-report-dot-com. I’m Mark Feffer.


Podcast: Insights into ADP Engineering with CTO Tim Halbur

ADP has been around for more than 70 years, providing payroll and other human resources services. Payroll processing is a complex business, involving the movement of money in accordance with regulatory and legal strictures.

From an engineering point of view, ADP has decades of software behind it and a bright future as a platform company used by thousands of companies. Balancing the maintenance of old code while charting a course with new projects is not a simple task.

Tim Halbur is the Chief Architect of ADP, and he joins the show to talk through how engineering works at ADP, and how the organization builds for the future of the company while maintaining the code of the past.



Does culture really eat strategy for breakfast?

https://eng.lifion.com/yes-culture-does-eat-strategy-for-breakfast-638ae19fc506

Yes, Culture DOES Eat Strategy for Breakfast

Jude Murphy

Nov 6, 2019 · 3 min read


https://eng.lifion.com/lets-talk-about-sets-813dfeb2185

 

Let’s Talk About Sets

A re-introduction to JavaScript Sets and the new Set methods

Edgardo Avilés

Mar 1, 2019 · 5 min read

Let’s talk about you and me and how we used to find unique items before ES6. We really only had two ways to do it (if you had another one, let me know). In the first, we would create a new empty object, iterate through the items we wanted to deduplicate, create a new property using each item as the key and something like “true” as the value, then get the list of keys of that new object, and we were done. In the second, we would create a new empty array, iterate through the items, and for each item check if it already existed in the array; if it was already there, continue, and if not, add it. By the end the array would contain all the unique items.
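To make those two approaches concrete, here is a minimal sketch of how the pre-ES6 workarounds typically looked (the function and variable names are illustrative, not taken from any particular codebase):

// Approach 1: use an object's keys to track what we have already seen
function uniqueWithObject(items) {
  var seen = {};
  for (var i = 0; i < items.length; i++) {
    seen[items[i]] = true; // the item becomes a key, the value is just a marker
  }
  return Object.keys(seen); // note: object keys are always strings
}

// Approach 2: build a new array, adding an item only if it is not already there
function uniqueWithArray(items) {
  var result = [];
  for (var i = 0; i < items.length; i++) {
    if (result.indexOf(items[i]) === -1) {
      result.push(items[i]);
    }
  }
  return result;
}

uniqueWithObject(['a', 'b', 'a']); // ['a', 'b']
uniqueWithArray(['a', 'b', 'a']);  // ['a', 'b']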

ES6 introduced Sets, a new data structure with a very simple API for handling unique items that is not just convenient but also very fast. The intention of this article is to introduce you to some new methods coming to Sets soon that will make them even more useful, but before that, let’s review the basics.

Here at Lifion we are big users of JavaScript; about 90% of our platform services are Node.js-based. If you are interested in seeing some examples of how Sets are used in our codebase, check out the open source projects on Lifion’s GitHub profile.

The basics of Sets

To create a new set we only need to use the constructor. We can optionally pass any iterator, such as an array or a string, and the iterated items will become elements of the new set (repeated items will be ignored).

const emptySet = new Set();
const prefilledSet = new Set(['a', 'b', 'a', 'c']); // example values; duplicates are ignored, so the set holds 'a', 'b', 'c'
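Beyond the constructor, the everyday Set API is small. Here is a brief sketch of the common operations and of getting an array back out of a set (the values are illustrative):

const ids = new Set([1, 2, 3]);
ids.add(4);     // adds a new element
ids.add(2);     // ignored, 2 is already present
ids.has(3);     // true
ids.delete(1);  // removes the element and returns true
ids.size;       // 3 -- the set now holds 2, 3, 4
const asArray = [...ids];                     // back to an array: [2, 3, 4]
const unique = [...new Set(['a', 'b', 'a'])]; // one-line deduplication: ['a', 'b']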


https://eng.lifion.com/lifions-cloud-transformation-journey-2333b7c0897d

 

Lifion’s Cloud Transformation Journey

On moving to managed services in a microservice architecture

Zaid Masud

Mar 26, 2019 · 5 min read

Since Lifion’s inception as ADP’s next-generation Human Capital Management (HCM) platform, we’ve made an effort to embrace relevant technology trends and advancements. From microservices and container orchestration frameworks to distributed databases, and everything in between, we’re continually exploring ways we can evolve our architecture. Our readiness to evaluate non-traditional, cutting edge technology has meant that some bets have stuck whereas others have pivoted.

One of our biggest pivots has been a shift from self-managed databases & streaming systems, running on cloud compute services (like Amazon EC2) and deployed with tools like Terraform and Ansible, towards fully cloud-managed services.

When we launched the effort to make this shift in early 2018, we began by executing a structured, planned initiative across an organization of 200+ engineers. After overcoming the initial inertia, the effort continued to gain momentum, eventually taking on a life of its own, and finally becoming fully embedded in how our teams work.

Along the way, we’ve been thinking about what we can give back. For example, we’ve previously written about a node.js client for AWS Kinesis that we’re working on as an open source initiative.

AWS’s re:Invent conference is perhaps the largest cloud community conference in the world. In late 2018, we presented our cloud transformation journey at re:Invent. As you can see in the recording, we described our journey and key learnings in adopting specific AWS managed services.

In this post, we discuss key factors that made the initiative successful, its benefits in our microservice architecture, and how managed services helped us shift our teams’ focus to our core product while improving overall reliability.

Why Services Don’t Share Databases

The notion of services sharing databases, making direct connections to the same database system and depending on shared schemas, is a recognized microservice anti-pattern. With shared databases, changes to the underlying database (including schemas, scaling operations such as sharding, or even migrating to a better database) become very difficult, requiring coordination across multiple service teams and releases.

As Amazon.com CTO Werner Vogels writes in his blog:

Each service encapsulates its own data and presents a hardened API for others to use. Most importantly, direct database access to the data from outside its respective service is not allowed. This architectural pattern was a response to the scaling challenges that had challenged Amazon.com through its first 5 years…

And Martin Fowler on integration databases:

On the whole integration databases lead to serious problems becaue [sic] the database becomes a point of coupling between the applications that access it. This is usually a deep coupling that significantly increases the risk involved in changing those applications and making it harder to evolve them. As a result most software architects that I respect take the view that integration databases should be avoided.

The Right Tool for the Job

Applying the database-per-service principle means that, in practice, service teams have significant autonomy in selecting the right database technologies for their purposes. Among other factors, their data modeling, query flexibility, consistency, latency, and throughput requirements will dictate the technologies that work best for them.

Up to this point, all is well — every service has isolated its data. However, when architecting a product with a double-digit number of domains, several important database infrastructure decisions need to be made:

  • Shared vs dedicated clusters: Should services share database clusters with logically isolated namespaces (like logical databases in MySQL), or should each have its own expensive cluster with dedicated resources?
  • Ownership: What level of ownership does a service team take for the deployment, monitoring, reliability, and maintenance of their infrastructure?
  • Consolidation: Is there an agreed set of technologies that teams can pick from, is there a process for introducing something new, or can a team pick anything they like?

From Self-Managed to Fully Managed Services

When we first started building out our services, we had a sprawl of supporting databases, streaming, and queuing systems. Each of these technologies was deployed on AWS EC2, and we were responsible for the full scope of managing this infrastructure: from the OS level, to topology design, configuration, upgrades and backups.

It didn’t take us long to realize how much time we were spending on managing all of this infrastructure. When we made the bet on managed services, several of the decisions we’d been struggling with started falling into place:

  • Shared vs dedicated clusters: Dedicated clusters for services, clearly preferable from a reliability and availability perspective, became easier to deploy and maintain. Offerings like SQS, DynamoDB, and Kinesis with no nodes or clusters to manage removed the concern altogether.
  • Ownership: Infrastructure simplification meant that service teams were able to develop further insight into their production usages, and take greater responsibility for their infrastructure.
  • Consolidation: We were now working with a major cloud provider’s service offerings, and found that there was enough breadth to span our use cases.

Evolutionary Architecture

On our Lifion engineering blog, we’ve previously written about our Lifion Developer Platform Credos. One of these speaks to the evolutionary nature of our work:

  • Build to evolve: We design our domains and services fully expecting that they will evolve over time.
  • Backwards compatible, versioned: Instead of big bang releases, we use versions or feature flags, letting service teams deploy at any time without coordinating dependencies (a minimal sketch follows this list).
  • Managed deprecations: When deprecating APIs or features, we carefully plan the impact and ensure that consumer impact is minimal.
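A minimal sketch of the feature-flag side of that credo (the flag name, the in-memory flag store, and the profile functions are illustrative stand-ins, not Lifion’s actual tooling):

// Stand-in for a real feature-flag service
const flags = new Map([['worker-profile-v2', false]]);

function getWorkerProfileV1(workerId) {
  return { id: workerId, version: 'v1' };
}

function getWorkerProfileV2(workerId) {
  return { id: workerId, version: 'v2', preferredName: null }; // new field shipped behind the flag
}

function getWorkerProfile(workerId) {
  // The new code path deploys dark and only takes traffic once the flag is flipped,
  // so releasing it never requires coordinating with consumers.
  return flags.get('worker-profile-v2')
    ? getWorkerProfileV2(workerId)
    : getWorkerProfileV1(workerId);
}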

When we started adopting managed services, we went for drop-in replacements first (for example, Aurora MySQL is wire compatible with the previous MySQL cluster we were using). This approach helped us to get some early momentum while uncovering dimensions like authentication, monitoring, and discoverability that would help us later.
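As a rough sketch of what a drop-in replacement meant in practice for a typical Node.js service (the environment variables, table, and query below are illustrative placeholders, not our actual configuration): because Aurora MySQL speaks the MySQL wire protocol, mostly only the connection settings change, while queries and application code stay the same.

const mysql = require('mysql2/promise');

// Before: a self-managed MySQL cluster on EC2. After: an Aurora MySQL cluster endpoint.
// Only the connection settings differ between the two.
const pool = mysql.createPool({
  host: process.env.DB_HOST, // e.g. the Aurora cluster endpoint instead of an EC2 host
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
});

async function getWorker(id) {
  const [rows] = await pool.query('SELECT * FROM workers WHERE id = ?', [id]);
  return rows[0];
}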

Our evolutionary architecture credo helped to ensure that the transition would be smooth for our services and our customers. Each deployment was done as a fully online operation, without customer impact. We recognize that we will undergo more evolutions, for which we intend to follow the same principles.


https://eng.lifion.com/promise-allpocalypse-cfb6741298a7

Promise.allpocalypse

The performance implications of misunderstanding Node.js promises

Ali Yousuf

Jan 22 · 8 min read

for…of over unknown collection with await in loop
Promise.all() on an entire unknown collection
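As a minimal illustration of those two scenarios (doWork and the items collection are hypothetical placeholders, not the article’s actual benchmark code):

// Hypothetical async task used only for illustration
async function doWork(item) {
  // imagine some I/O-bound work per item here
  return item;
}

// Scenario 1: for...of with await in the loop -- items are processed strictly one at a
// time, so total time grows linearly with the (unknown) size of the collection.
async function sequential(items) {
  const results = [];
  for (const item of items) {
    results.push(await doWork(item));
  }
  return results;
}

// Scenario 2: Promise.all() over the entire collection -- every task starts at once,
// which can exhaust memory, sockets, or downstream capacity when the size is unknown.
async function allAtOnce(items) {
  return Promise.all(items.map((item) => doWork(item)));
}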

Benchmarking unbounded promise scenarios

╔═══════════════╦══════════════════════════════════╗
║     Test      ║ Average Execution Time (Seconds) ║
╠═══════════════╬══════════════════════════════════╣
║ await-for-of  ║                            6.943 ║
║ bluebird-map  ║                            4.550 ║
║ for-of        ║                            6.745 ║
║ p-limit       ║                            4.523 ║
║ promise-all   ║                            4.524 ║
║ promise-limit ║                            4.457 ║
╚═══════════════╩══════════════════════════════════╝
for…of test code
for await…of test code
Clinic.js doctor output for for await…of and for…of, respectively
Clinic.js bubbleprof output for for await…of and for…of, respectively
Promise.all() test code
Clinic.js doctor output for Promise.all()

Promise chain execution order example
Async chain execution order example
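A minimal sketch of the kind of ordering comparison those two captions refer to (illustrative code, not the original examples): chained .then() callbacks and the code after an await are both deferred to the microtask queue, so they run only after the current synchronous code has finished.

// Snippet 1 (run on its own): .then() callbacks are deferred to the microtask queue
Promise.resolve()
  .then(() => console.log('then 1'))
  .then(() => console.log('then 2'));
console.log('sync end');
// Prints: sync end, then 1, then 2

// Snippet 2 (run on its own): everything after an await is deferred the same way
(async () => {
  console.log('before await');
  await null;
  console.log('after await');
})();
console.log('also sync');
// Prints: before await, also sync, after await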
Bluebird.map() with concurrency limit test code
Clinic.js doctor output for Bluebird.map() with concurrency limit
promise-limit module test code
Clinic.js doctor output for the promise-limit module
p-limit module test code
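A hedged sketch of what a p-limit version of the benchmark task typically looks like (the concurrency value and the doWork helper are illustrative placeholders):

const pLimit = require('p-limit');

// Hypothetical async task, standing in for whatever the benchmark did per item
async function doWork(item) {
  return item;
}

const limit = pLimit(5); // at most 5 doWork() calls in flight at any one time

async function limited(items) {
  // Every task is wrapped by the limiter, so Promise.all() still collects all the
  // results, but only a bounded number of promises run concurrently.
  return Promise.all(items.map((item) => limit(() => doWork(item))));
}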
Clinic.js doctor output for the p-limit module

Conclusion