CloudShark Support

Best Practices for running CloudShark

Deploying CloudShark in a Production Environment

At its core, CloudShark is a CentOS or Red Hat Linux system, which allows system administrators to bring their own tools for maintenance.

The system is initially provided in a very open configuration, and should be deployed into a protected network during the trial and early configuration phases. When being pushed into a production environment, there are some procedures that are generally accepted as ‘best practices’.

Lock down access

The ‘cloudshark’ system user (as opposed to the web user) is created by the CloudShark installation process. CloudShark’s system calls are run as this user and it is imperative that the user account persist. The default password is set to ‘cloudshark’, and it is recommended that this user account password be changed.

The web interface, which is the primary interface to CloudShark, is shipped with a single ‘admin’ user, password ‘cloudshark’. This, too, should be changed by the administrator.

Whether to install remote services such as SSH is left to the administrator’s initial configuration. While this is beyond the scope of this document, note that doing so exposes remote access to the cloudshark system account, which is perhaps the single largest justification for locking that account down.
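If SSH is installed, one way to reduce that exposure (an illustration, not a CloudShark requirement) is to deny remote logins for the system account in /etc/ssh/sshd_config:

```
# /etc/ssh/sshd_config — block remote SSH logins for the cloudshark account
DenyUsers cloudshark
```

Restart sshd after editing (service sshd restart on CentOS 6, systemctl restart sshd on CentOS 7) for the change to take effect.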

TCP Ports which CloudShark uses

CloudShark’s listening ports are TCP 80 and 443. If you wish to restrict access to CloudShark to certain IP address ranges, the system is a standard Linux kernel with iptables filtering available.
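For example, to allow web access only from a management subnet, rules along these lines could be used (the 10.0.0.0/8 range is a placeholder, and on CentOS 6 persistent rules live in /etc/sysconfig/iptables; adapt chain names and ranges to your existing ruleset):

```
# /etc/sysconfig/iptables — allow HTTP/HTTPS from one range only (example)
-A INPUT -p tcp -s 10.0.0.0/8 --dport 80 -j ACCEPT
-A INPUT -p tcp -s 10.0.0.0/8 --dport 443 -j ACCEPT
-A INPUT -p tcp --dport 80 -j DROP
-A INPUT -p tcp --dport 443 -j DROP
```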

CloudShark provides HTTP and HTTPS access by default. The HTTPS certificate is self-signed since there is no way to provide a signed certificate in the default distribution. This will allow users to connect over an encrypted channel, but without the benefits of identity verification. You can replace the self-signed certificates with your own on the Enable HTTPS on a CloudShark Appliance page.

During the installation of CloudShark, a firewall entry is added dynamically to allow access to the CloudShark IP address for HTTP/HTTPS, and also a specific entry to allow access to the special localhost interface to ensure a service called memcached is available for use.


Memcached can be configured to accept connections on only the loopback interface. If your firewall already has a default policy of deny, this setting should not have any additional effect, but security-conscious administrators may wish to lock this service down at the program level as well. After installing CloudShark, edit the memcached configuration file, /etc/sysconfig/memcached, and change the OPTIONS line to listen only on the loopback interface. CloudShark also does not use memcached over UDP, so this can be disabled as well:

OPTIONS="-l localhost -U 0"

After editing this file run this command to restart memcached for this change to take effect:

service memcached restart

Updated March 3rd, 2018

This memcached configuration was updated to disable memcached over UDP. Allowing this could lead to memcached being used in a DDoS attack as described in this post.

Update Servers

Follow our knowledge base article to lock down access to only the servers required to update CloudShark and the underlying operating system.

SQL Server

The SQL server installed with CloudShark listens for connections on all interfaces by default. Administrators may wish to lock this down to allow connections only over a local socket by adding the line skip-networking to the [mysqld] section of the /etc/my.cnf configuration file:
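That is, the [mysqld] section of /etc/my.cnf gains one line:

```
[mysqld]
skip-networking
```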


After making changes to this file restart the SQL server by running service mysqld restart on CentOS 6 or systemctl restart mariadb on CentOS 7 as root to make the changes take effect.

Log files

CloudShark writes logging information to a number of log files. For information on these files and instructions on how to rotate these log files see our knowledge base article on the CloudShark log files.



Time Synchronization

It is a good idea to run a time synchronization service such as NTP to keep your system’s clock as close to a common reference time as possible. Packet capture files have microsecond precision, and a clock skew of even half a second can cause difficulties correlating between resources.

We recommend following the CentOS NTP Configuration page to learn which strategy is best for your system. Note that the ntpdate package may need to be installed using yum to enable NTP. To install this package run the following command as root:

# yum install ntpdate


Timezone

You may wish to adjust the timezone of your system to match your locale. This aligns the timestamps displayed for packet capture files with those of your local capture devices.

The timezone is typically set when you first install the operating system, but for those who have come from an OVA Virtual Machine install, there’s an easy way to change it.

For example, if you are located near New York, USA, you can set your timezone to ‘America/New_York’ as follows:

su -
mv /etc/localtime /etc/localtime.old
ln -s /usr/share/zoneinfo/America/New_York /etc/localtime

For a full list of available timezones, you can browse /usr/share/zoneinfo/ or use this handy list on Wikipedia.

Staying up to date

Operating System

CentOS and RHEL publish regular security updates, and we recommend applying them as they become available. We have yet to see a security update conflict with CloudShark, and we generally test these updates the same day they are published.

That said, it is always a good practice to take regular backups of any computer, both automatically and right before any new software is installed. There is a section later in this document detailing CloudShark backup and restoration.

To view the list of uninstalled security related upgrades, run

yum --security check-update

To install these packages:

yum --security update

CloudShark Upgrades

The cloudshark-admin utility can connect to the CloudShark Lounge and check for upgrade entitlements. Example:

# cloudshark-admin --info
CloudShark Lounge username?
CloudShark Lounge password? (invisible input):
The version of CloudShark installed on this system is 1.9.1507.
Version 1.9.1528 is available for upgrade.

If your system is offline, you can still manually check for upgrades by logging into the CloudShark Lounge from another system.

Check our release notes for changes between CloudShark version numbers.

Please refer to our Upgrade Instructions to learn this process.


Securing Your Data

CloudShark stores packet capture files, user account information, and private RSA certificate keys for decryption. These are all stored on the system in various directories and databases, and they must be factored into your organization’s security policies.

Capture File Storage

Changing the Default Storage Directory

It is easy to change the default directory that CloudShark uses to store capture files.


Changes to the default storage directory should be made before any captures are uploaded to your CloudShark system. Existing capture files will need to be imported into CloudShark again following any storage path changes, so this procedure should only be done during your initial configuration.

  • Create a new directory. We’ll call ours /cap. Grant ownership to the cloudshark user and group:
mkdir /cap
chown cloudshark.cloudshark /cap

Do not adjust the directory location by manually editing the cloudshark.conf file. This is not supported.

  • Replace the /var/www/cloudshark/shared/uploads directory with a symbolic link to your new storage directory:

mv /var/www/cloudshark/shared/uploads /var/www/cloudshark/shared/uploads.original
ln -s /cap /var/www/cloudshark/shared/uploads

This replaces the CloudShark uploads directory immediately. No restart is required.

To make sure that this storage location is available before the CloudShark service starts you can add this as a requirement to the service. First run systemctl edit cloudshark.service and add a line similar to the one below:
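For example, if the capture store is a separate mount at /cap, the standard systemd directive for this is RequiresMountsFor (shown here as an assumption; the exact line on your system may differ):

```
[Unit]
RequiresMountsFor=/cap
```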


Replace /cap in the line above with the directory where your CloudShark instance stores its captures. Then run systemctl daemon-reload to apply the change and systemctl cat cloudshark.service to verify. The output of the cat command should look similar to:

# /usr/lib/systemd/system/cloudshark.service
[Unit]
Description=Top level CloudShark process monitor
After=mariadb.service memcached.service

[Service]
ExecStartPre=/usr/bin/rm -rf /run/blkid/
ExecStartPre=/bin/sh -c "/usr/sbin/blkid $(cat /usr/cloudshark/etc/blkid_dev)"
ExecStart=/usr/cloudshark/ruby/bin/ruby /usr/cloudshark/ruby/bin/god -c


# /etc/systemd/system/cloudshark.service.d/override.conf

External Storage

A common scenario is to mount a remote file system on a CloudShark host, to improve the size, speed, and robustness of the system storage. The section above details how to change the default storage directory for the system’s capture files.
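For example, an NFS capture store could be mounted at /cap via /etc/fstab (the server name and export path below are placeholders):

```
# /etc/fstab — example NFS mount for the capture store
filer.example.com:/export/captures  /cap  nfs  defaults,_netdev  0 0
```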

NFS does not support the inotify system call, which allows a system to be notified of file system changes. Because the CloudShark Autoimporter relies on this system call to operate, it is not possible to use a remote NFS share as an AutoImport target: files written to the remote NFS share do not trigger the events, so they are never imported.

An NFS file system which is exported from CloudShark does not have this limitation because the file events are still processed by the same system.

Regular Maintenance Schedules

Automatic Pruning of Older Results

Many CloudShark administrators desire that older capture files be automatically removed from their systems. This is easy to implement using the CloudShark HTTP API. We have published an example of a CloudShark Auto-prune script.
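A script like that one can be scheduled with cron; the script path and schedule below are placeholders, not part of the published example:

```
# /etc/cron.d/cloudshark-prune — run the prune script nightly
15 2 * * * cloudshark /usr/local/bin/cloudshark-autoprune.sh
```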


Backup and Restoration

We do not recommend a specific strategy for performing data backups. Most administrators already have a strategy they are comfortable with, and given that CloudShark runs on a standard CentOS/Red Hat base, there is a good chance their strategy will be compatible.

We can, however, enumerate the specific items most critical to restoring CloudShark:

  • The entire /var/www/cloudshark directory. Please note that the /var/www/cloudshark/shared directory will be at least as large as the size of all the packet capture files on the system. Also, this is the location of the private keys, so please be security conscious about how these assets are handled.
  • The MySQL database cloudshark
  • /usr/cloudshark is installed from a YUM repository during the initial CloudShark installation. No user data is stored here, and you can reinstall from the repository at any time, subject to a current license agreement.
  • /home/cloudshark is the cloudshark user’s home directory. Some system utility configuration files are stored in this directory, typically as dotfiles.
  • /etc is a standard Linux directory with timezone settings, local account passwords, and other configurations.
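The list above can be sketched as a short script. This is a minimal illustration, assuming the default paths and that mysqldump can authenticate (for example via a root .my.cnf); it only prints the commands so you can review them before running anything:

```shell
#!/bin/sh
# Print the backup commands for the items listed above (dry run).
# The destination path and mysqldump credential handling are assumptions.
backup_plan() {
  dest="/backup/cloudshark-$1"
  echo "mkdir -p $dest"
  echo "mysqldump cloudshark > $dest/cloudshark.sql"
  echo "tar -czf $dest/var-www-cloudshark.tar.gz /var/www/cloudshark"
  echo "tar -czf $dest/home-cloudshark.tar.gz /home/cloudshark"
  echo "cp -a /etc/my.cnf /etc/localtime $dest/"
}
backup_plan "$(date +%Y%m%d)"   # review the plan, then pipe to sh to execute
```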

About CloudShark

CloudShark is made by QA Cafe, a technology company based in Portsmouth, NH. Our passion for packet captures has grown out of our other product CDRouter.

Get in touch via our Contact us page or by following us on your favorite service: