Calvin Goodale, Developer - Blog

Hack Reactor | SDC | Journal Entry 11

Overview
Implementing nginx load-balancing for my node servers by running 4 ec2 instances, each with a copy of my node server, and splitting requests between them based on nginx configuration options.

Challenge/Motivation
Our task / challenge during this sprint was to implement 2 optimizations on our api servers and analyze the results. My second choice for optimization, after caching, was load balancing. My suspicion was that it would not have as great an effect as caching did, but I was hoping it would still get me another few thousand RPS while staying under the SLA of 2000ms and 0.1% errors.

Actions Taken
I created 3 more ec2 instances, installed git, nvm, and npm on each, cloned down my node api, and installed pm2 to keep the servers running. Once I had these 3 other instances up and running (which was relatively simple, since all I had to do was create clones of my first instance and clone down the repo), I went back into my first instance, where my nginx proxy was running, and added some load-balancing configuration.
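
Roughly, the setup on each new instance looked like the following; the repo URL and server entry point are placeholders here, and the exact node version may differ:

# install node via nvm, pull down the API, and keep it running with pm2
nvm install --lts
git clone https://github.com/<me>/<sdc-api>.git && cd <sdc-api>
npm install
npm install -g pm2
pm2 start server.js --name sdc-api    # entry point name is a placeholder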

log_format upstreamlog '$server_name to: $upstream_addr {$request} '
                       'upstream_response_time $upstream_response_time'
                       ' request_time $request_time';

upstream my_http_servers {
    least_conn;
    server 172.31.48.138:3030;
    server 172.31.54.21:3030;
    server 172.31.52.228:3030;
    server 172.31.59.44:3030;
}

I added a custom log format and log file for my load balancing so I could make sure it was working properly, and then added the 4 routes for my API servers into the upstream block (using the private IP addresses from my ec2 instances). I decided to use the least_conn nginx option, which sends each request to the server with the fewest active connections, as this seemed optimal.
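
Roughly, here is how those pieces fit together in the server block of my nginx config; the server_name and log file path are placeholders, not my exact values:

server {
    listen 80;
    server_name <my-ec2-public-dns>;    # placeholder

    # log each proxied request using the custom upstreamlog format defined above
    access_log /var/log/nginx/upstream.log upstreamlog;

    location / {
        # spread requests across the 4 node instances in the upstream block
        proxy_pass http://my_http_servers;
    }
}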

Results Observed
So far, I was able to get up to 6500 RPS while staying under the SLA, which is nearly a 1000 RPS improvement over caching alone.



I'm betting that I will be able to improve it even more after some research and changes to my nginx configuration. Right now it's the error rate that climbs above 0.1% as I get closer to 7000 RPS - the response time seems fine. I'll be doing some analysis this week on what could be causing the error rate to spike and trying to find some tweaks that could help.



Hack Reactor | SDC | Journal Entry 10

 

Overview
Adding nginx caching to node / mongo api servers

Challenge/Motivation
The first optimization I made to my node server was adding caching via nginx, which saves API responses to disk and answers repeat requests with the cached copies. I was confident that caching would give a significant increase in performance, based on my research and discussions with team members / cohort mates.

Actions Taken
I created a new nginx conf file called cache.conf and added my cache configuration entries:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=custom_cache:10m inactive=60m;


I set the path on disk where the cache is stored, and added levels=1:2, which configures nginx to spread the cached data across a two-level directory hierarchy, since having a huge number of files in a single directory can make access slower. keys_zone creates a shared memory zone (named custom_cache) holding a copy of the cache keys, which allows nginx to determine whether a request is a hit or a miss without checking the disk; the 10m sets that zone's size to 10 megabytes. inactive=60m tells nginx to drop cached responses that haven't been accessed in 60 minutes.

proxy_cache custom_cache;       
proxy_cache_revalidate on;
add_header X-Cache-Status $upstream_cache_status;
proxy_pass         http://my_http_servers;

proxy_cache activates the cache zone defined at the top of the file. proxy_cache_revalidate lets nginx refresh expired cached responses with conditional requests, so it can keep serving them if they haven't been modified. I added the X-Cache-Status header so I could check the response headers to see whether my cache was working, and proxy_pass forwards requests on to my upstream server group.
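
Putting it together, cache.conf is structured roughly like this; the listen port and location path are assumptions, and my_http_servers is the upstream group defined in my proxy config:

# cache storage on disk plus the shared key zone (http context)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=custom_cache:10m inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache            custom_cache;
        proxy_cache_revalidate on;
        # reports HIT / MISS / EXPIRED, handy for confirming the cache works
        add_header             X-Cache-Status $upstream_cache_status;
        proxy_pass             http://my_http_servers;
    }
}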

Results Observed
After warming up my cache, I was able to get around 5700 RPS with my now cache-enabled API server.


My bet is that this optimization will be the best bang for my buck, but I'm hoping to squeeze out a few more RPS with load balancing as well.



Hack Reactor | SDC | Journal Entry 9

Overview
Stress-testing the deployed app

Challenge/Motivation
Now that the server and database are both deployed and connected on AWS, our next task was to stress test our routes to the point of failure (pre-optimization), which according to our SLA is an error rate above 1% or a response time above 2000ms.

Actions Taken
I decided to use loader.io for testing the routes of my deployed app.

  1. Create a loader.io account
  2. Download the verification file
  3. Upload the file to the server and create a GET route specifically for the verification file (see the sketch after this list)
  4. Verify the file
  5. Create some initial tests: 1 RPS, 100 RPS, 500 RPS, 1000 RPS
  6. Determine the RPS number that puts the request routes above SLA
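
For step 3, this is roughly what getting the verification file onto the server and checking it looked like; the key path, token, public DNS, and static directory are all placeholders:

# copy the loader.io verification file up to the node instance
scp -i ~/.ssh/<my-key>.pem ./loaderio-<token>.txt ubuntu@<node-ec2-public-dns>:~/sdc-api/public/
# confirm the GET route serves it before running verification in loader.io
curl -i http://<node-ec2-public-dns>:3030/loaderio-<token>.txt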

Results Observed
For the deployed server, I was able to get up to 725 RPS before going above the 2000ms response time (but did not have any errors).

Next steps will be to decide on optimization methods, such as caching, load balancing, payload compression, and other strategies.



Hack Reactor | SDC | Journal Entry 8

Overview
Deploying my node server and mongo database to AWS

Challenge/Motivation
Our task was to deploy both our node API server and mongo db to separate ec2 instances on AWS, and then connect them in order to deploy the SDC app.

Actions Taken

  1. Create two ec2 t2.micro instances on the same VPC
  2. Add the proper security groups for each
    • Both: Allow SSH traffic from my IP
    • Both: Allow Inbound traffic from all IPs on port 80 (http)
    • Both: Allow outbound traffic from all IPs for both on all ports
    • Node instance: Allow inbound traffic from all IPs for port 3030 (the port my server is running on)
    • Node instance: Allow inbound traffic from all IPs for port 443 (https)
    • Mongo instance: Allow inbound traffic from all IPs for port 27017
  3. SSH into the node ec2 instance, install git and nvm.
  4. Change the mongoose connection IP to the public DNS for my mongo ec2 instance.
  5. Clone down the git repository into the node ec2 instance.
  6. Run npm install.
  7. Use mongodump locally to export my mongodb data into a BSON dump (see the sketch after this list).
  8. SSH into the mongo ec2 instance and install mongo if it isn't already installed. Create an admin user with a password and enable authentication in the mongod.conf file.
  9. Set bindIp to 0.0.0.0 in the mongod.conf file so the database accepts connections from any IP.
  10. Create the /data/db directory and make the mongod user its owner.
  11. Use scp to upload the BSON dump I just created to the mongo ec2 instance.
  12. From the mongo ec2 instance terminal, run mongorestore on the dump to import the data.
  13. Test the mongo connection using MongoDB Compass.
  14. Run the node server.
  15. Visit the node server's public DNS and port in the browser.
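
A rough sketch of the data migration from steps 7, 11, and 12; the database name, key path, and public DNS are placeholders:

# on my local machine: dump the database to BSON files
mongodump --db <sdc-db> --out ./dump

# copy the dump up to the mongo ec2 instance
scp -i ~/.ssh/<my-key>.pem -r ./dump ubuntu@<mongo-ec2-public-dns>:~/dump

# on the mongo ec2 instance: restore the dump as the admin user
mongorestore --username admin --password <password> --authenticationDatabase admin ~/dump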

Results Observed
I was able to get my server and database deployed to AWS and connected to each other. Next steps are to do some testing on the deployed API and optimizations.



Hack Reactor | SDC | Journal Entry 7

Overview
Docker setup for the express server and mongo

Challenge/Motivation
I decided to get some practice in with Docker locally.

Actions Taken

  1. Create a network for both containers to use:
    docker network create network-sdc
  2. Create the mongodb container:
    sudo docker run --name mongo-sdc -d --network network-sdc -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=<password> -v <your-local-mongodb-data-path>:/data/db mongo
  3. Create the node server Docker image:
    sudo docker build . -t node-sdc
  4. Create the node server Docker container:
    sudo docker run --name node-sdc -p 49160:3030 --network network-sdc -d -e DB_PASSWORD='<db-password-in-env-file>' node-sdc
  5. Check that both containers are running:
    sudo docker container ls -a
  6. Check log file for node server container:
    sudo docker logs node-sdc
  7. Curl to an API server route to make sure expected data is returned:
    curl -i localhost:49160/products/1

Results Observed
The most challenging parts were figuring out how to enable authentication for the mongo database in the context of Docker (which required initializing the database with a root user and password), making sure to pass the password from my .env file into the container, and figuring out that both containers needed to be on the same network.
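
A couple of sanity checks along these lines helped confirm the auth and networking; the container names match the commands above, the database name is a placeholder, and older mongo images ship the mongo shell instead of mongosh:

# confirm the root user can authenticate inside the mongo container
sudo docker exec -it mongo-sdc mongosh -u admin -p '<password>' --authenticationDatabase admin --eval "db.adminCommand({ ping: 1 })"

# because both containers share network-sdc, the node app reaches mongo by container name,
# with a connection string roughly of the form:
#   mongodb://admin:<password>@mongo-sdc:27017/<db-name>?authSource=admin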

