The sun shines brightly, and we look to a new day where the world of work is digitally and virtually connected. If you are a recent college graduate looking for an opportunity or someone looking to change the career path to try something new, this is an exciting time to research, explore, and reach recruiters.
The unfortunate side effect of the pandemic is that the world is now predominantly virtual, temporarily suspending shaking hands and making connections over coffee. It is still too early for in-person career fairs at the fancy convention centers to resume, or for the chance to walk up to a recruiter and start a conversation. The world has changed, a bit too fast for comfort. And for every opportunity posted, there are many more applicants.
Now more than ever, your RESUME creates the first impression about you and your skills for recruiters and hiring managers. Your resume will determine if you are a “possible candidate” for an opening with their organization!
A resume is not an itemized bullet list of everything you’ve studied, where you’ve worked, or volunteered. Chances are recruiters will not have time to read, review, and recap all of that to determine your candidate eligibility. So, let your resume do the job it was meant to do!
Something to think about: When was the last time anyone read a full newspaper? Yes, every row and column on every page, to get the news? You only scan the highlights, right?
Your resume has one objective: to highlight your skills and make an impression on a recruiter, so that you get invited for an interview! That’s it!
Your R.E.S.U.M.E. needs to highlight accomplishments that are relevant to the job posted. So, you may need to customize it if you plan to apply for jobs in different fields. Start thinking of your elevator pitch in document form. Easy, right?
It’s OK to showcase your skills. You are the only person who can present your talent. You are the builder of your aspirations, and the driver of your career.
Visualize yourself walking up to receive an award, and the greeter announcing your specific accomplishments that led to the award. This is about you, so why hide your skills in a bulleted list? Shine your skills brightly enough to be seen.
Now, start writing your two-page resume to present yourself as a “possible and strong candidate.” Use the pyramid to guide your way. If it’s been a while since you last updated your resume, check out the new resume templates in Microsoft Word or a similar editing tool for creative ideas. Give it a try!
Next, connect your resume with your digital presence on LinkedIn. Download the LinkedIn mobile app. In a few easy steps, you can create your digital signature with a QR code and save it with your photos.
Smart tip: When a recruiter scans your “Unique QR code,” they can see your accomplishments and connect through LinkedIn Messaging. Super cool!
We live in a digital world, yet the resume is still a PDF or even a paper copy. No problem! Simply, add your QR code to your resume.
Invest in yourself, keep learning, and add your skills to your LinkedIn profile. Show the world skills that make you a dream hire for any organization!
Do all these things, and, yes, soon, you will have that email or in-mail message from a recruiter asking to call you for an interview. That next opportunity is just around the corner. Believe in yourself.
Best of Luck!
Jyotsna Manikantan is a Lead Product Manager, Portfolio Strategy & Operations, and is celebrating her 13th-year @ADP
“The best employees love what they do,” former PepsiCo CEO Indra Nooyi once responded to a crowd of bright-eyed summer IT interns. It’s quite a simple answer to the question “What do the best employees do, and how can I be like them?” The more you think about Indra’s quick response, the more you realize it extends to more than simply ‘what’ we do. As one of those interns, I didn’t quite understand the depth of this statement until I had exposure to different roles later in my career.
Loving what you do is inherently important, yes, but it’s not exactly the single most important aspect of making you an incredible employee. Whether people want to acknowledge it or not, what you do for work becomes part of you. For example, I may have been called a ‘tech bro’ once or twice after being asked what industry I’m in, to which I usually respond: “I’m not fixing your computer”. After the laughter calms down, I usually begin talking about the work I do, and then inevitably, the “are you happy?” question arises. Most people respond fairly quickly, and to varying degrees, but if you stop and think about the rounded picture of happiness at your job, it usually comes down to one thing: personal fulfillment based on the investment of your time. Many concepts can be abstracted from this — work relationships/enjoyment, culture-driven benefits, monetary compensation, societal impact, etc. How many start-ups have you heard say that they’re going to change the world? Have you ever stopped to think about this as a marketing ploy, rather than a vision statement?
We all have different opinions on what’s heavier when it comes to our personal happiness, but I’ve found that one particularly weighs more than others, and that’s culture. From an organizational perspective, culture is tough to get right and easy to ruin, but when fostered correctly can truly breed the best and happiest employees. Culture is something that’s created organically, targeting the basic human desire to belong, fit in, and feel like a contributor to the group. A group that allows for the dynamism of thought, and freedom to express it without judgment, enables each individual to feel heard while allowing the best outcome to become possible. This, in turn, opens up avenues for new and personal discussions between individuals, potentially turning into friendships. When an employee goes into work every morning, considering his/her coworkers to be people he/she looks forward to working with, that’s where the magic happens, and culture is born. Good relationships grow general satisfaction with a person’s environment. Happy employees are much more likely to go out of their way to make the organization succeed, which full circle, makes them a good employee in the eyes of the organization.
How often in pop culture is work-life balance portrayed with someone comically saying, “I don’t know any of you outside of work”? Now, full disclosure, I’ve never once heard anyone explicitly say this, but I have seen individuals act this way. This mentality simply doesn’t instill a sense of trust, no matter what organization or industry you work in. Don’t get me wrong, separation is good and healthy to have, but communication is key.
At Lifion, by ADP, we’re in the business of ‘Human Capital Management’, and we understand that if an organization doesn’t manage its resources correctly, even the strongest culture won’t prevail. Everyone wants to be treated as an individual, rather than just another employee. No inhuman litmus test for happiness should ever substitute for a one-on-one, a drink together at happy hour, or a sincere “hey, how are you doing?”.
In summary, you never know the amount of impact you have on someone’s daily work experience. People are more than just resources to get a job done and respond well when treated as an individual. At the end of the day, you can have the best strategy in the world, but with no one to build it, it’s not worth anything. Simply put, whether you are a company, a manager, or a coworker, the best advice I can give: be human.
ADP Business Anthropologist Martha Bird sat down with Daniel Litwin, the Voice of B2B, at CES 2020, discussing a wide range of topics related to how her anthropological work and research impacts businesses and consumer needs.
Bird has worked for numerous companies in the field of business anthropology since the early 2000s, working to create human-focused solutions to business needs.
Bird and Litwin touch on their CES experience, a modern focus on human-centered and human-responsive products and how those concepts affect consumer product development, consumer longing for personalized experiences, and more.
Let’s talk about you and me and how we used to find unique items before ES6. We really only had two ways to do it (if you had another one, let me know). In the first, we would create a new empty object and iterate through the items we wanted to deduplicate, creating a property for each one using the item as the key and something like “true” as the value; the list of keys of that object was our deduplicated result. In the second, we would create a new empty array and iterate through the items; for each item, we would check whether it already existed in the array, skipping it if it did and adding it if it didn’t. By the end, the array would contain all the unique items.
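Sketched in code, those two pre-ES6 approaches look roughly like this (the item values are illustrative):

```javascript
var items = ['a', 'b', 'a', 'c', 'b']; // illustrative data

// Way 1: object keys are unique, so use each item as a key.
var seen = {};
for (var i = 0; i < items.length; i++) {
  seen[items[i]] = true;
}
var unique1 = Object.keys(seen); // ['a', 'b', 'c']

// Way 2: push into a new array only if it isn't already there.
var unique2 = [];
for (var j = 0; j < items.length; j++) {
  if (unique2.indexOf(items[j]) === -1) {
    unique2.push(items[j]);
  }
}
// unique2 is now ['a', 'b', 'c']
```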
ES6 introduced Sets, a new data structure with a very simple API to handle unique items that is not just convenient but also very fast. The intention of this article is to introduce you to some new methods coming to Sets soon that will make them even more useful, but before, let’s remember the basics.
To create a new set we only need to use the constructor. We can optionally pass any iterator, such as an array or a string, and the iterated items will become elements of the new set (repeated items will be ignored).
const emptySet = new Set();
const prefilledSet = new Set(['a', 'b', 'a']); // illustrative values; the repeated 'a' is ignored
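A quick tour of the core Set API (values are illustrative):

```javascript
const unique = new Set(['a', 'b', 'a', 'c']); // the duplicate 'a' is dropped

console.log(unique.size);     // 3
console.log(unique.has('b')); // true
unique.add('d');              // returns the set itself, so calls can chain
unique.delete('a');           // returns true because 'a' was present

// Spreading a set back into an array is the one-line dedupe idiom:
const deduped = [...new Set([1, 1, 2, 3, 3])];
console.log(deduped);         // [1, 2, 3]
```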
Since Lifion’s inception as ADP’s next-generation Human Capital Management (HCM) platform, we’ve made an effort to embrace relevant technology trends and advancements. From microservices and container orchestration frameworks to distributed databases, and everything in between, we’re continually exploring ways we can evolve our architecture. Our readiness to evaluate non-traditional, cutting edge technology has meant that some bets have stuck whereas others have pivoted.
One of our biggest pivots has been a shift from self-managed databases & streaming systems, running on cloud compute services (like Amazon EC2) and deployed with tools like Terraform and Ansible, towards fully cloud-managed services.
When we launched the effort to make this shift in early 2018, we began by executing a structured, planned initiative across an organization of 200+ engineers. After overcoming the initial inertia, the effort continued to gain momentum, eventually taking on a life of its own, and finally becoming fully embedded in how our teams work.
Along the way, we’ve been thinking about what we can give back. For example, we’ve previously written about a Node.js client for AWS Kinesis that we’re working on as an open source initiative.
AWS’s re:Invent conference is perhaps the largest cloud community conference in the world. In late 2018, we presented our cloud transformation journey at re:Invent. As you can see in the recording, we described our journey and key learnings in adopting specific AWS managed services.
In this post, we discuss key factors that made the initiative successful, its benefits in our microservice architecture, and how managed services helped us shift our teams’ focus to our core product while improving overall reliability.
The notion of services sharing databases, making direct connections to the same database system and depending on shared schemas, is a recognized microservice anti-pattern. With shared databases, changes to the underlying database (including schemas, scaling operations such as sharding, or even migrating to a better database) become very difficult, requiring coordination across multiple service teams and releases.
As Amazon.com CTO Werner Vogels writes in his blog:
Each service encapsulates its own data and presents a hardened API for others to use. Most importantly, direct database access to the data from outside its respective service is not allowed. This architectural pattern was a response to the scaling challenges that had challenged Amazon.com through its first 5 years…
And Martin Fowler on integration databases:
On the whole integration databases lead to serious problems becaue [sic] the database becomes a point of coupling between the applications that access it. This is usually a deep coupling that significantly increases the risk involved in changing those applications and making it harder to evolve them. As a result most software architects that I respect take the view that integration databases should be avoided.
Applying the database-per-service principle means that, in practice, service teams have significant autonomy in selecting the right database technologies for their purposes. Among other factors, their data modeling, query flexibility, consistency, latency, and throughput requirements will dictate the technologies that work best for them.
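In code, the principle boils down to a service’s data store never leaving the module that owns it. Here is a toy sketch (all names hypothetical, with a Map standing in for the service’s own database):

```javascript
// A toy "payroll" service: the data store is private to the module,
// and other services can only go through the exported functions.
const payrollDb = new Map(); // stands in for the service's own database

function recordPayment(employeeId, amount) {
  const history = payrollDb.get(employeeId) || [];
  history.push(amount);
  payrollDb.set(employeeId, history);
}

function totalPaid(employeeId) {
  return (payrollDb.get(employeeId) || []).reduce((sum, a) => sum + a, 0);
}

// Only the hardened API leaves the module; the Map itself never does,
// so the team is free to swap it for any database without breaking callers.
module.exports = { recordPayment, totalPaid };
```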
Up to this point, all is well: every service has isolated its data. However, when architecting a product with double-digit domains, several important database infrastructure decisions need to be made.
When we first started building out our services, we had a sprawl of supporting databases, streaming, and queuing systems. Each of these technologies was deployed on AWS EC2, and we were responsible for the full scope of managing this infrastructure: from the OS level, to topology design, configuration, upgrades and backups.
It didn’t take us long to realize how much time we were spending on managing all of this infrastructure. When we made the bet on managed services, several of the decisions we’d been struggling with started falling into place.
On our Lifion engineering blog, we’ve previously written about our Lifion Developer Platform Credos. One of these speaks to the evolutionary nature of our work.
When we started adopting managed services, we went for drop-in replacements first (for example, Aurora MySQL is wire compatible with the previous MySQL cluster we were using). This approach helped us to get some early momentum while uncovering dimensions like authentication, monitoring, and discoverability that would help us later.
Our evolutionary architecture credo helped to ensure that the transition would be smooth for our services and our customers. Each deployment was done as a fully online operation, without customer impact. We recognize that we will undergo more evolutions, for which we intend to follow the same principles.
There is nothing quite like the existential dread of realizing you may have a performance issue. Months’, possibly years’, worth of code is suddenly failing you in a live environment. Reproduction is difficult, and even if you can reproduce the problem on your dev machine, your usual methods of debugging are likely useless. Sure, you can litter your code with entirely too many analytics, or leverage Node’s own profiling, but this produces a data hose that takes a deep understanding of Node, V8, and data science itself to make sense of. This is a rather dire situation.
A common scenario in any service is you may have some asynchronous operation that you need to perform on a data set of unknown length. Your first instinct may look something like this:
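The original snippet is not preserved here, but the pattern in question is awaiting each operation inside the loop, something like this sketch (`fetchItem` is a hypothetical stand-in for the real async call):

```javascript
// Hypothetical stand-in for an async call to another service.
const fetchItem = (id) =>
  new Promise((resolve) => setTimeout(() => resolve(id * 2), 5));

async function processAll(ids) {
  const results = [];
  for (const id of ids) {
    // Each iteration awaits the previous request before starting the next,
    // so only one operation is ever in flight.
    results.push(await fetchItem(id));
  }
  return results;
}
```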
You quickly discover that this is not very fast. Simple interrogation of CPU utilization will show Node is spending a lot of time waiting. Depending on your lint rules, you may not even be able to get this code past a CI build. The solution for many is to simply fire all the promises then await all of them. That, in and of itself, is not a bad thing; for example, say you know that every time you execute a given route, you need to make 3 unrelated calls to other services. Since you know it will always create 3 promises, this is bounded and safe. The problems arise when the promise creation is potentially unbounded.
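The fire-everything-then-await version looks roughly like this (again with a hypothetical `fetchItem`):

```javascript
// Hypothetical stand-in for an async call to another service.
const fetchItem = (id) =>
  new Promise((resolve) => setTimeout(() => resolve(id * 2), 5));

function processAllAtOnce(ids) {
  // Every promise is created up front: if `ids` has 50,000 entries,
  // 50,000 requests are in flight at once. Fine when the count is a
  // known small number; dangerous when it is unbounded.
  return Promise.all(ids.map((id) => fetchItem(id)));
}
```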
You test your code and discover it is much faster, Node is not spending time waiting around, and the lint error is gone. You merge and deploy your code, all the integration tests pass, and you go home to watch The Expanse. Suddenly, right when the Rocinante (a legitimate salvage) is entering a tense battle, you start getting PagerDuty alerts. Your service’s containers are being restarted constantly due to Out of Memory exceptions, the container host is experiencing socket exhaustion, and your response times are through the roof. You have a major performance and scalability issue.
This is a nice clean scenario, so the suspect code is easily identified, but what if it isn’t? Why does this code fail to scale? This is where Clinic.js shines. For the sake of exercising Clinic.js, I created 6 scenarios that all perform the same asynchronous work some n times using different strategies: for…of, the new ES2018 for await…of, the Promise.all() approach described above, Bluebird.map() with a concurrency limit set, the promise-limit module, and the p-limit module.
For the test, I start a clustered web server that accepts a get and post request. Both verbs will perform some arbitrary work before completing the request. In the test itself, I get the data, do arbitrary work, and post the results back. In both cases the work is primarily parsing JSON and manipulating the object in an attempt to simulate a common Node workload. Each test run does this work 500 times, for each element in a mock collection. For the limiting modules, they are all set to allow 10 concurrent promises. All tests were performed using Node v12.14.0. The average execution times over 10 test runs can be seen below.
╔═══════════════╦══════════════════════════════════╗
║ Test          ║ Average Execution Time (Seconds) ║
╠═══════════════╬══════════════════════════════════╣
║ await-for-of  ║ 6.943                            ║
║ bluebird-map  ║ 4.550                            ║
║ for-of        ║ 6.745                            ║
║ p-limit       ║ 4.523                            ║
║ promise-all   ║ 4.524                            ║
║ promise-limit ║ 4.457                            ║
╚═══════════════╩══════════════════════════════════╝
The results here certainly reflect the performance improvement when going from for…of to issuing concurrent promises, but the limiting libraries have similar execution times. Let’s dive deeper into what is going on.
In the for await…of and for…of scenarios (respectively), the CPU utilization bounces around and averages well below 100%. This means the Node process is spending a lot of time waiting around. The execution time is also non-ideal at roughly 7 seconds averaged over 10 runs. Clinic detects this and reports a potential I/O wait issue bottlenecking the process. It recommends running a ‘bubbleprof’ to better identify the issue.
Note the light blue segment in the graph; it represents network I/O waits. The process is spending quite a bit of time waiting for network operations. Perhaps we can speed this up by issuing them concurrently.
In the Promise.all() scenario, execution time has improved dramatically to 4.5 seconds on average. The CPU is now being efficiently utilized. However, notice the memory usage spiking up much higher with incredibly high event loop delay. Clinic detects this issue and reports a potential event loop issue. Why is this?
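The ordering that causes this is easy to demonstrate: a promise’s executor runs synchronously, but its `then` callbacks only run after the current synchronous block has finished. A minimal sketch:

```javascript
const order = [];

const p = new Promise((resolve) => {
  order.push('executor'); // runs immediately, synchronously
  resolve();
});

p.then(() => order.push('then')); // queued as a microtask

order.push('end of block'); // still part of the synchronous run

// Once the synchronous block finishes, the queued `then` fires:
setTimeout(() => console.log(order.join(' -> ')), 0);
// prints: executor -> end of block -> then
```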
As you can see, the promise callback is invoked first, then it executes to the end of the block. Only after that do the `then` blocks fire. This, however, is only part of the story.
In the case of the Promise.all() code above, it will create a request for every item in the collection before any `then` block is resolved. This is true even if the early requests complete before it reaches the end of the collection. It will not process the responses until the event loop ticks over, because I/O callbacks are not invoked until the event loop enters its I/O phase. Only at that point is `.then()`, or the code block following `await`, executed.
Ultimately the creation of an unbounded number of promises will result in all of the response objects becoming queued in memory and waiting for the event loop to tick over. This can swamp the Node process with excessive memory allocations, or worse, cause out of memory events. When these objects go out of scope, it will freeze up when the garbage collector executes as it is primarily a blocking, O(N) operation. It also means when the event loop finally does tick over, all of the `.then()` callbacks of completed requests fire synchronously. If this contains heavy work, the event loop will be stuck in the callback phase until this work is completed or another truly asynchronous operation is encountered. This means any events, inbound requests, timers, etc. will not be handled until the ‘.then()’ callbacks are all completed. The biggest concern here is that response times can spike to several seconds. To solve this, we need to limit the number of concurrent promises created.
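A hand-rolled way to impose that limit is a small worker pool. This sketch (hypothetical names, not the code from our services) keeps at most `limit` tasks in flight:

```javascript
// Hypothetical stand-in for the real async work.
const fetchItem = (id) =>
  new Promise((resolve) => setTimeout(() => resolve(id * 2), 5));

// Run `task(item)` for every item, but keep at most `limit` tasks in flight.
async function mapWithLimit(items, limit, task) {
  const results = new Array(items.length);
  let next = 0; // index of the next unclaimed item
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    async () => {
      // Each worker synchronously claims an index, awaits the task, then
      // loops for the next one; claims can't race on a single thread.
      while (next < items.length) {
        const i = next++;
        results[i] = await task(items[i]);
      }
    }
  );
  await Promise.all(workers);
  return results;
}
```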
As indicated in the graphs, Bluebird.map() with a concurrency limit pretty much eliminates the event loop blocking and the high memory usage while maintaining efficient CPU utilization. It matches the performance of Promise.all(), completing the test in 4.5s. Bluebird, however, has lost its performance edge over native promises in recent versions of V8, and it is a rather heavy, complete Promise replacement. Let’s consider some native-promise options.
The promise-limit module also resolves our issue, and again matches the performance of Promise.all() at 4.5s; that parity makes sense given the aforementioned pressure Promise.all() puts on Node’s memory system through unnecessary allocations and complex garbage collection events. The overhead of this module does get flagged by Clinic.js as creating some event loop blocking. Notably, it is also nowhere near as popular as the next module, p-limit.
The p-limit module has the least amount of event loop blocking of all the options, while providing the same performance as Promise.all() at 4.5s. Given the popularity compared to promise-limit and the performance of this module, this is the clear winner.
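p-limit’s interface is a factory that returns a limiter function, and you wrap each task in that limiter. This sketch includes a minimal stand-in with the same call shape so it runs without the dependency installed; in real code you would import the actual p-limit module instead:

```javascript
// Minimal stand-in mirroring p-limit's basic call shape, so this sketch
// runs without the dependency; the real module does more than this.
function pLimit(concurrency) {
  let active = 0;
  const queue = [];
  const tryNext = () => {
    if (active >= concurrency || queue.length === 0) return;
    active++;
    const { fn, resolve, reject } = queue.shift();
    Promise.resolve()
      .then(fn)                 // run the wrapped task
      .then(resolve, reject)    // settle the caller's promise
      .finally(() => {
        active--;
        tryNext();              // a slot freed up; start the next task
      });
  };
  return (fn) =>
    new Promise((resolve, reject) => {
      queue.push({ fn, resolve, reject });
      tryNext();
    });
}

// Usage mirrors the real p-limit API:
const limit = pLimit(10); // at most 10 tasks in flight
const doWork = (id) => new Promise((res) => setTimeout(() => res(id * 2), 5));
const tasks = [1, 2, 3].map((id) => limit(() => doWork(id)));
Promise.all(tasks).then((results) => console.log(results)); // [ 2, 4, 6 ]
```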
Clinic.js enables introspection into how your Node application is performing beyond simple top line execution times. It introspects not only your resource utilization, but what the code itself is doing. By no means have I explored all the capabilities of this tool here, however it grants us insight into why a specific codepath may appear to perform without issue on its own, but wreak havoc inside of a live service. NearForm presented this to us during Node Day 2019 and it has quickly become an indispensable debugging tool for us. Its ease of use and clear presentation helps not only quickly identify problematic code, but also convey the “why” to others who are not deep into the debugging process with you. It was the obvious choice to illustrate the issues with unbounded promise creation.