Understanding how slow queries impact your business, and why it's essential to address them

Over the years, we have seen the business impact of leaving slow queries unfixed, often while developers insisted they were not an issue.

  1. During a traffic surge, whether from a security incident or a marketing campaign, database resources become saturated. Application servers are then forced to autoscale because they hold connections open longer than expected.
  2. Keeping active data and application servers provisioned for peak load can increase cloud bills by 30% to 300%, based on our experience with the various e-commerce brands we work with.
  3. Read more about how latency impacts sales: https://www.fastcompany.com/1825005/how-one-second-could-cost-amazon-16-billion-sales

The result is a poor customer experience and a drop in revenue.


Boosting Web Response Time: Enhancing MongoDB Queries for Improved Performance

MongoDB's performance is crucial for modern applications. However, slow queries can significantly impact user experience and resource utilization. Thankfully, MongoDB provides built-in features like profiling and slow query logs to identify and address performance bottlenecks. In this blog, we'll guide you through implementing slow query logs on MongoDB, giving you the power to optimize your database performance.

Understanding Slow Queries:

Before diving in, let's define a "slow query." Generally, it's a query that takes longer than a predefined threshold to execute. While the threshold depends on your needs, anything above 100ms is often considered slow. These queries can consume excessive resources, leading to sluggish database performance and impacting other users.

Enabling Slow Query Logs:

MongoDB offers two main options for slow query logging:

  1. Query Profiling: This built-in feature collects detailed information about every query's execution, including its duration, operations, and scanned documents. While powerful, it can generate significant log volume, impacting performance.

  2. Slow Query Log: This option lets you define a threshold and log only queries exceeding that time. It offers a good balance between detailed information and performance impact.
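
The second option maps onto profiling level 1, which records only operations slower than a given threshold. A minimal sketch, assuming the legacy mongo shell used later in this article and a placeholder database name (mydb):

```shell
# Profiler call: level 1, 100 ms threshold — only operations slower
# than 100 ms are recorded.
PROFILE_EVAL='db.setProfilingLevel(1, 100)'

# Run it against a placeholder database (mydb) when the mongo shell is
# installed; this skips harmlessly otherwise.
if command -v mongo >/dev/null 2>&1; then
  mongo -u "$USERNAME" -p "$PASSWORD" --authenticationDatabase admin --quiet \
    --eval "$PROFILE_EVAL" mydb
fi
```

The rest of this post automates exactly this call across every database on every MongoDB host, using Ansible.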

Set Up Mongo Slow Query Monitoring Using Ansible

First, create a project directory with a name of your choice (we use mongo_ansible throughout this post):

mkdir /path/mongo_ansible (replace the folder name as you like)

Inside this directory, make a folder to hold our playbooks. Keeping them together means that any future playbooks you write for other actions can live in the same place.

mkdir playbook
cd playbook 

Inside the playbook folder, create a new file, mongo.yml, and paste the following code:

vim mongo.yml
---
- name: Transfer and execute a script
  hosts: all
  become: true
  gather_facts: false
  tasks:
    - name: Run a script
      ansible.builtin.script: ../script.sh
      register: result

    - name: Show result
      debug:
        msg: "{{ result.stdout }}"

In the root directory of our project, i.e., mongo_ansible, create a new file, ansible.cfg. This file holds all the Ansible configuration our playbook needs to run.

vim ansible.cfg
[defaults]
inventory = ./inventory.yml
host_key_checking = False
remote_user = username_of_instances
deprecation_warnings = False
private_key_file = /path/to/pemfile

The config above points at inventory.yml, which lists the IPs of all machines running MongoDB on which we need to enable slow query logs. Create inventory.yml in the project directory:

vim inventory.yml
all:
  children:
    mongo:
      hosts:
        0.0.0.0:

Remember to replace the placeholder 0.0.0.0 with the IP of the machine where your MongoDB is running. You can add multiple IPs in the format below so that the one script runs on all the MongoDB machines.

all:
  children:
    mongo:
      hosts:
        0.0.0.0:
        127.0.0.1:

The SSH and inventory setup is now complete; the next step is the script itself.

Now, we will write our main script.sh in the project root. It contains the commands the playbook will run on each MongoDB machine after a successful SSH connection.

vim script.sh
#!/bin/bash
source ./.env
# Use the variables from .env (avoid echoing the password)
echo "Username: $USERNAME"
# List all databases, keeping only the names and dropping the system databases
db_name=$(echo "show dbs;" | mongo -u "$USERNAME" -p "$PASSWORD" --authenticationDatabase admin --quiet | awk '{print $1}' | grep -v -e '^config$' -e '^admin$' -e '^local$')
echo "$db_name"
# Enable profiling level 1 with a 100 ms threshold on each database
while IFS= read -r line; do
  echo -e "use $line\ndb.setProfilingLevel(1, 100);" | mongo -u "$USERNAME" -p "$PASSWORD" --authenticationDatabase admin --quiet
done <<< "$db_name"
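
The awk/grep chain is the part of the script most worth sanity-checking. Here is a small self-contained sketch of the same filtering idea, run against sample `show dbs` output rather than a live server (the database names are made up):

```shell
# Sample of what `mongo --quiet` prints for "show dbs;" (name, then size).
sample_output='admin  0.000GB
config 0.000GB
local  0.000GB
shop   0.012GB'

# Keep the first column and drop the system databases, as in script.sh.
db_name=$(echo "$sample_output" | awk '{print $1}' | grep -v -e '^config$' -e '^admin$' -e '^local$')
echo "$db_name"   # → shop
```

Only user databases survive the filter, so the loop in script.sh never touches admin, config, or local.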

Since the username and password are sensitive information, no developer should ever push them to a public repo. We will keep them in an env file instead, so create .env as below in the same project directory:

vim .env
#!/bin/bash
set -a
export USERNAME=your_username
export PASSWORD='your_password'
set +a

That completes the coding, so run the playbook with the following command:

ansible-playbook -l mongo playbook/mongo.yml

Note: Please ensure Ansible is already installed on the machine from which you run the script.

You can easily download Ansible according to your operating system and check your system compatibility. 
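
Before running the playbook, it can also help to confirm that Ansible can reach every host in the mongo inventory group over SSH. A quick sketch (it skips harmlessly where Ansible isn't installed):

```shell
# Target the `mongo` group from inventory.yml with Ansible's ping module,
# which tests SSH connectivity and Python availability on each host.
GROUP=mongo
if command -v ansible >/dev/null 2>&1; then
  ansible "$GROUP" -m ping
fi
```

Every host should answer with "pong"; any that don't will also fail when the playbook runs.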

Accessing and Analyzing Logs:

With profiling level 1 enabled, slow operations are recorded in each database's system.profile collection (and also appear in the mongod log file). You can access and analyze them using:

  • Command-line tools: Use the mongo shell and commands like db.system.profile.find() to query the logs.
  • Visualization tools: Leverage tools like Kibana or MongoDB Charts to visualize and analyze large volumes of log data.
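
As a command-line example, the following sketch lists the five slowest recorded operations in a database (placeholder name mydb), sorted by duration:

```shell
# system.profile stores one document per profiled operation; `millis` holds
# its duration, so sorting descending surfaces the worst offenders first.
SLOWEST='db.system.profile.find().sort({ millis: -1 }).limit(5).pretty()'
if command -v mongo >/dev/null 2>&1; then
  mongo -u "$USERNAME" -p "$PASSWORD" --authenticationDatabase admin --quiet \
    --eval "$SLOWEST" mydb
fi
```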

Optimizing Queries:

Now that you have the data, the real work begins! Analyze the slow query logs to identify problematic queries based on execution time, operations performed and scanned documents. Here are some optimization strategies:

  • Indexing: Create appropriate indexes on frequently used fields to improve query speed.
  • Query Optimization: Review the query structure and use efficient operators and filters.
  • Explain Queries: Use the explain() method to understand how MongoDB executes specific queries.
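
A sketch combining the first and last strategies, with a hypothetical orders collection and customerId field: create an index on the filtered field, then re-run the flagged query under explain() and check that the winning plan uses an IXSCAN (index scan) rather than a COLLSCAN (full collection scan):

```shell
# Create the index, then explain the query that the slow log flagged.
OPTIMIZE='db.orders.createIndex({ customerId: 1 });
db.orders.find({ customerId: 42 }).explain("executionStats");'
if command -v mongo >/dev/null 2>&1; then
  mongo -u "$USERNAME" -p "$PASSWORD" --authenticationDatabase admin --quiet \
    --eval "$OPTIMIZE" mydb
fi
```

In the explain output, compare executionStats.totalDocsExamined with nReturned: after indexing, the two should be close, instead of every document being scanned.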

Continuous optimization results in increased backend query efficiency and enhanced infrastructure reliability.

Conclusion:

Integrating slow query logs within MongoDB enables proactive identification and resolution of performance bottlenecks. By analyzing and optimizing slow queries, you guarantee a seamless and responsive database environment for your users. It's crucial to consistently monitor and optimize MongoDB performance. However, addressing slowness and response time isn't solely about these measures. Consider scheduling a call with an infrastructure optimization expert to enhance your application's scalability further.