
Free Open Source Database Deployment & Monitoring with ClusterControl Community Edition


The ClusterControl Community Edition is a free-to-use, all-in-one database management system that allows you to easily deploy and monitor the top open source database technologies like MySQL, MariaDB, Percona, MongoDB, PostgreSQL, Galera Cluster and more. It also allows you to import and monitor your existing database stack.

Free Database Deployment

The ClusterControl Community Edition ensures your team can easily and securely deploy production-ready open source database stacks that are built using battle-tested, proven methodologies. You don’t have to be a database expert to utilize the ClusterControl Community Edition - deploying the most popular open source databases is easy with our point-and-click interface. Even if you are a master of deploying databases, ClusterControl’s point-and-click deployments will save you time and ensure your databases are deployed correctly, removing the chance for human error. There is also a CLI for those who prefer the command line, or need to integrate with automation scripts.

The ClusterControl Community Edition is not restricted to a single database technology and supports the major flavors and versions. With it, you’re able to perform point-and-click deployments of standalone MySQL, MySQL replication, MySQL Cluster, Galera Cluster, MariaDB, MariaDB Cluster, Percona XtraDB Cluster, Percona Server for MongoDB, MongoDB and PostgreSQL!

Free Database Monitoring

The ClusterControl Community Edition makes monitoring easy by providing you the ability to look at all your database instances across multiple data centers or drill into individual nodes and queries to pinpoint issues. Offering a high-level, multi-dc view as well as a deep-dive view, ClusterControl lets you keep track of your databases so you can keep them running at peak performance.

In addition to monitoring the overall stack and node performance you can also monitor the specific queries to identify potential errors that could affect performance and uptime.

Why pay for a monitoring tool when the ClusterControl Community Edition gives you a great one for free?

Free Database Developer Studio

The Developer Studio provides you with a set of monitoring and performance advisors and lets you create custom advisors to add security and stability to your database infrastructures. It lets you extend the functionality of ClusterControl, helping you detect and solve unique problems in your environments.

We even encourage our users to share the advisors they have created on GitHub by forking our current advisor bundle. If we like them and think that they might be good for other users, we’ll include them in future ClusterControl releases.


Why Should I Use the ClusterControl Community Edition?

These are just a few of the reasons why you should use ClusterControl as your system to deploy and monitor your open source database environments…

  • You can deploy knowing you are using proven methodologies and industry best practices.
  • If you are just getting started with open source database technology, ClusterControl makes it easy for beginners to deploy and monitor their stacks, removing human error and saving time.
  • Not familiar with orchestration tools like Puppet or Chef? Don’t worry! The ClusterControl Community Edition uses a point-and-click GUI to make it easy to get your environment production-ready.
  • The ClusterControl Community Edition gives you deployment and monitoring in one battle-tested all-in-one system. Why use one tool for scripting only to use a different tool for monitoring?
  • Not sure which database technology is right for your application? The ClusterControl Community Edition supports nearly two dozen database versions for you to try.
  • Have a load balancer running on an existing stack? With the ClusterControl Community Edition you can import your existing, already configured load balancer and run it alongside your database instances.

If you are ready to give it a try click here to download and install the latest version of ClusterControl. Each install comes with the option to activate a 30-day enterprise trial as well.


Announcing ClusterControl 1.5.1 - Featuring Backup Encryption for MySQL, MongoDB & PostgreSQL


What better way to start a new year than with a new product release?

Today we are excited to announce the 1.5.1 release of ClusterControl - the all-inclusive database management system that lets you easily deploy, monitor, manage and scale highly available open source databases - and load balancers - in any environment: on-premise or in the cloud.

ClusterControl 1.5.1 features encryption of backups for MySQL, MongoDB and PostgreSQL, a new topology viewer, support for MongoDB 3.4, several user experience improvements and more!

Feature Highlights

Full Backup and Restore Encryption for these supported backup methods

  • mysqldump, xtrabackup (MySQL)
  • pg_dump, pg_basebackup (PostgreSQL)
  • mongodump (MongoDB)

New Topology View (BETA) shows your replication topology (including load balancers) for your entire cluster to help you visualize your setup.

  • MySQL Replication Topology
  • MySQL Galera Topology

Improved MongoDB Support

  • Support for MongoDB v3.4
  • Fix to add back restore from backup
  • Multiple NICs support. Management/public IPs for monitoring connections and data/private IPs for replication traffic

Misc

Improved user experience featuring a new left-side navigation that includes:

  • Global settings breakout to make it easier to find settings related to a specific feature
  • Quick node actions that allow you to quickly perform actions on your node

View Release Details and Resources

Improving Database Security: Backup & Restore Encryption

ClusterControl 1.5 introduces another step to ensuring your databases are kept secure and protected.

Backup & restore encryption means that backups are encrypted at rest using the AES-256 CBC algorithm. An auto-generated key is stored in the cluster's configuration file under /etc/cmon.d. The backup files are transferred in encrypted format. Users can now secure their backups for offsite or cloud storage with the flip of a checkbox. This feature is available for select backup methods for MySQL, MongoDB & PostgreSQL.

New Topology View (beta)

This exciting new feature provides an “overhead” topology view of your entire cluster, including load balancers. While in beta, this feature currently supports MySQL Replication and Galera topologies. With this new feature, you can drag and drop to perform node actions. For example, you can drag a replication slave on top of a master node - which will prompt you to either rebuild the slave or change the replication master.

Improved User Experience

The new Left Side Navigation and the new quick actions and settings that accompany it mark the first major redesign to the ClusterControl interface in some time. ClusterControl offers a vast array of functionality, so much so that it can sometimes be overwhelming to the novice. This addition of the new navigation allows the user quick access to what they need on a regular basis and the new node quick actions lets users quickly run common commands and requests right from the navigation.

Download the new ClusterControl or request a demo.

MongoDB Security - Resources to Keep NoSQL DBs Secure


We’ve almost become desensitized to the news. It seems that every other day there is a data breach at a major enterprise resulting in confidential customer information being stolen and sold to the highest bidder.

Data breaches rose by 40% in 2016, and once all the numbers are calculated, 2017 is expected to blow that number out of the water. Yahoo announced the largest breach in history in 2017, while other companies like Xbox, Verizon, and Equifax also announced major breaches.

Because of the 2017 MongoDB ransomware hack, MongoDB security is on everyone's mind.

We decided to pull together some of our top resources that you can use to ensure your MongoDB instances remain secure.

Here are our most popular and relevant resources on the topic of MongoDB Security…

ClusterControl & MongoDB Security

Data is the lifeblood of your business. Whether it’s protecting confidential client data or securing your own IP, your business could be doomed should critical data get into the wrong hands. ClusterControl provides many advanced deployment, monitoring and management features to ensure your databases and their data are secure. Learn how!

How to Secure MongoDB with ClusterControl - The Webinar

In March of 2017, at the height of the MongoDB ransomware crisis, we hosted a webinar to talk about how you can keep MongoDB secure using ClusterControl. With authentication disabled by default in MongoDB, learning how to secure MongoDB becomes essential. In this webinar we explain how you can improve your MongoDB security and demonstrate how this is automatically done by ClusterControl.

Using the ClusterControl Developer Studio to Stay Secure

In our blog “MongoDB Tutorial: Monitoring and Securing MongoDB with ClusterControl Advisors” we demonstrated nine of the advisors from our repository for MongoDB that can assist with MongoDB security.

Audit Logging for MongoDB

In our blog “Preemptive Security with Audit Logging for MongoDB” we show that having access to an audit log would have given those affected by the ransom hack the ability to perform pre-emptive measures. The audit log is one of the most underrated features of MongoDB Enterprise and Percona Server for MongoDB. We will uncover its secrets in this blog post.

The 2017 MongoDB Ransom Hack

In January of 2017, thousands of MongoDB servers were held for ransom simply because they were deployed without basic authentication in place. In our first blog on the ransom hack, “Secure MongoDB and Protect Yourself from the Ransom Hack”, we explain what happened and give some simple steps to keep your data safe. In the second blog, “How to Secure MongoDB from Ransomware - Ten Tips”, we went further, showing even more things you can do to make sure your MongoDB instances are secure.

The Importance of Automation for MongoDB Security

Severalnines CEO Vinay Joosery shares with us the blog “How MongoDB Database Automation Improves Security” and discusses how the growing number of cyberattacks on open source database deployments highlights the industry’s poor administrative and operational practices. This blog explores how database automation is the key to keeping your MongoDB database secure.


ClusterControl for MongoDB

Users of MongoDB often have to work with a variety of tools to achieve their requirements; ClusterControl provides an all-inclusive system where you don’t have to cobble together different tools.

ClusterControl offers users a single interface to securely manage their MongoDB infrastructures and mixed open source database environments, whether on-premises or in the cloud. It also provides an alternative to vendors that employ aggressive price increases, helping you avoid lock-in and control your costs.

ClusterControl provides the following features to deploy and manage your MongoDB stacks...

  • Easy Deployment: You can now automatically and securely deploy sharded MongoDB clusters or Replica Sets with ClusterControl’s free community version, as well as automatically convert a Replica Set into a sharded cluster if required.
  • Single Interface: ClusterControl provides one single interface to automate your mixed MongoDB, MySQL, and PostgreSQL database environments.
  • Advanced Security: ClusterControl removes human error and provides access to a suite of security features automatically protecting your databases from hacks and other threats.
  • Monitoring: ClusterControl provides a unified view of all sharded environments across your data centers and lets you drill down into individual nodes.
  • Scaling: Easily add and remove nodes, resize instances, and clone your production clusters with ClusterControl.
  • Management: ClusterControl provides management features that automatically repair and recover broken nodes, and test and automate upgrades.
  • Advisors: ClusterControl’s library of Advisors allows you to extend the features of ClusterControl to add even more MongoDB management functionality.
  • Developer Studio: The ClusterControl Developer Studio lets you customize your own MongoDB deployment to enable you to solve your unique problems.

To learn more about the exciting features we offer for MongoDB click here or watch this video.

How to Secure Your Open Source Databases with ClusterControl


Security is one of the most important aspects of running a database. Whether you are a developer or a DBA, if you are managing the database, it is your responsibility to safeguard your data and protect it from any kind of unauthorized access. The unfortunate fact is that many organizations do not protect their data, as we’ve seen from the new wave of MongoDB ransomware attacks in September 2017. We had earlier published a blog on how to secure MongoDB databases.

In this blog post, we’ll have a look into how to secure your databases using ClusterControl. All of the features described here are available in version 1.5.1 of ClusterControl (released on December 23, 2017). Please note that some features are only available for certain database types.

Backup Encryption

ClusterControl 1.5.1 introduced a new feature called backup encryption. All encrypted backups are marked with a lock icon next to them:

You can use this feature with all backup methods (mysqldump, xtrabackup, mongodump, pg_dump) supported by ClusterControl. To enable encryption, simply toggle the "Enable Encryption" switch when scheduling or creating the backup. ClusterControl automatically generates a key to encrypt the backup. It uses the AES-256 (CBC) encryption algorithm and performs the encryption on-the-fly on the target server. The following command shows an example of how ClusterControl performs a mysqldump backup:

$ mysqldump --defaults-file=/etc/my.cnf --flush-privileges --hex-blob --opt --no-create-info --no-data --triggers --routines --events --single-transaction --skip-comments --skip-lock-tables --skip-add-locks --databases db1 | gzip -6 -c | openssl enc -aes-256-cbc -pass file:/var/tmp/cmon-094508-e0bc6ad658e88d93.tmp | socat - TCP4:192.168.55.170:9999

You would see the following error if you tried to decompress an encrypted backup without decrypting it first with the proper key:

$ gunzip mysqldump_2018-01-03_175727_data.sql.gz
gzip: mysqldump_2018-01-03_175727_data.sql.gz: not in gzip format

The key is stored inside the ClusterControl database, and can be retrieved from the cmon_backup.metadata file for a particular backup set. It will be used by ClusterControl when performing a restoration. Encrypting backups is highly recommended, especially when you store them offsite, for example archived in the cloud.
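
If you ever need to restore such a backup manually, you can simply reverse the pipeline shown above: decrypt with the retrieved key first, then decompress. A minimal sketch, assuming the key has been retrieved from cmon_backup.metadata and saved to a local file (the key path here is illustrative):

$ openssl enc -d -aes-256-cbc -pass file:/path/to/backup.key -in mysqldump_2018-01-03_175727_data.sql.gz | gunzip > mysqldump_data.sql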

MySQL/PostgreSQL Client-Server Encryption

Apart from following the recommended security steps during deployment, you can increase the security of your database service by using client-server SSL encryption. Using ClusterControl, you can perform this operation with a simple point and click:

You can then retrieve the generated keys and certificates directly from the ClusterControl host, under the /var/lib/cmon/ca path, to establish secure connections with the database clients. All the keys and certificates can be managed directly under Key Management, as described further down.
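
Once clients connect over SSL, you can verify from the client side that the session is actually encrypted - a non-empty value for the following status variable means SSL is in use:

mysql> SHOW STATUS LIKE 'Ssl_cipher';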

Database Replication Encryption

Encryption of replication traffic within a Galera Cluster can be enabled with just one click. ClusterControl uses a 2048-bit default key and certificate generated on the ClusterControl node, which is transferred to all the Galera nodes:

A cluster restart is necessary. ClusterControl will perform a rolling restart operation, taking one node at a time. You will see a green lock icon next to the database server (Galera indicates Galera Replication encryption, while SSL indicates client-server encryption) in the Hosts grid of the Overview page once encryption is enabled:

All the keys and certificates can be managed directly under Key Management, as described further down.


Key Management

All the generated keys and certificates can be managed directly from the ClusterControl UI. Key Management allows you to manage SSL certificates and keys that can be provisioned on your clusters:

If the certificate has expired, you can simply use the UI to generate a new certificate with proper key and Certificate Authority (CA), or import an existing key and certificate into ClusterControl host.

Security Advisors

Advisors are mini-programs that run in ClusterControl. They perform specific tasks and provide advice on how to address issues in areas such as performance, security, log management, configuration, storage space and others. Each advisor can be scheduled like a cron job, and run as a standalone executable within the ClusterControl UI. It can also be run via the ClusterControl 's9s' command line client.

ClusterControl enables two security advisors for MySQL-based systems:

  • Access from any host ('%') - Identifies all users that use a wildcard host from the mysql system table, and lets you have more control over which hosts are able to connect to the servers.
  • Check number of accounts without a password - Identifies all users who do not have a password in the mysql system table.
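
These checks roughly correspond to queries like the following against the mysql system table (a sketch; on MySQL 5.7 the password hash lives in the authentication_string column, while older versions use the password column):

mysql> SELECT user, host FROM mysql.user WHERE host = '%';
mysql> SELECT user, host FROM mysql.user WHERE authentication_string = '';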

For MongoDB, we have the following advisors:

  • MongoDB authentication enabled - Checks whether the MongoDB instance is running with authentication mode enabled.
  • Authorization check - Checks whether MongoDB users are authorized with too permissive a role for access control.

For more details on how ClusterControl performs the security checks, you can look at the advisors' JavaScript-like source code under Manage -> Developer Studio. You can see the execution results on the Advisors page:

Multiple Network Interfaces

Having multiple NICs on the database hosts allows you to separate database traffic from management traffic. One network is used by the database nodes to communicate with each other, and this network is not exposed to any public network. The other network is used by ClusterControl for management purposes. ClusterControl is able to deploy such a multi-network setup. Consider the following architecture diagram:

To import the above database cluster into ClusterControl, one would specify the primary IP address of the database hosts. Then, it is possible to choose the management network as well as the data network:

ClusterControl can also work in an environment without Internet access, with the databases being totally isolated from the public network. The majority of the features will work just fine. If the ClusterControl host has Internet access, it is also capable of cloning the database vendor's repository for the Internet-less database servers. Just go to Settings (top menu) -> Repositories -> Create New Repository and set the options to fit the target database server environment:

The mirroring may take 10 to 20 minutes depending on the Internet connection, after which you will see the new item in the list. You can then pick this repository when scaling or deploying a new cluster, without the need for the database hosts to have any Internet connection (note that the operating system’s offline repository should be in place as well).

MySQL User Management

The MySQL privilege system ensures that all users can perform only the operations they are allowed to. Granting is critical as you don't want to give all users complete access to your database, but you need users to have the necessary permissions to run queries and perform daily tasks.

ClusterControl provides an interactive user interface to manage the database schemas and privileges. It unifies the accounts on all MySQL servers in the cluster and simplifies the granting process. You can easily visualize the database users, so you avoid making mistakes.

As you can see in the above screenshot, ClusterControl greys out unnecessary privileges if you only want to grant a user access to a single database (shopdb). "Require SSL?" is only enabled if client/server SSL encryption is enabled, while the administration privilege checkboxes are disabled entirely if a specific database is defined. You can also inspect the generated GRANT statement at the bottom of the wizard, to see the statement that ClusterControl will execute to create this user. This helper looks pretty simple, but creating users and granting privileges by hand can be error-prone.

ClusterControl also provides a list of inactive users for all database nodes in the cluster, showing the accounts that have not been used since the last server restart:

This alerts the administrator to unnecessary accounts that exist and could potentially harm the server. The next step is to verify that the accounts are indeed no longer active; if so, you can simply use the "Drop Selected User" option to remove them. Make sure there has been enough database activity for the list generated by ClusterControl to be accurate - the longer the server uptime, the better.

Always Keep Up-to-date

For production use, it’s highly recommended for you to install the database-related packages from the vendor’s repository. Don’t rely on the default operating system repository, where the packages are usually outdated. If you are running in a cluster environment like Galera Cluster, or even MySQL Replication, you always have the choice to patch the system with minimal downtime.

ClusterControl supports automatic minor version rolling upgrades for MySQL/MariaDB with a single click. Just go to Manage -> Upgrades -> Upgrade and choose the appropriate major version for your running cluster. ClusterControl will then perform the upgrade one node at a time: the node is stopped, the software is updated, and the node is started again. If a node fails to upgrade, the upgrade process is aborted and the admin is notified. Upgrades should only be performed when there is as little traffic as possible on the cluster.

Major version upgrades (e.g., from MySQL 5.6 to MySQL 5.7) are intentionally not automated. Major upgrades usually require uninstallation of the existing packages, which is a risky task to automate. Careful planning and testing is necessary for such upgrades.

Database security is an important aspect of running your database in production. From all the incidents we frequently read about in the news (and there are probably many others that are not publicized), it is clear that there are groups busy out there with bad intentions. So, make sure your databases are well protected.

Ten Tips on How to Achieve MySQL and MariaDB Security


Security of data is a top priority these days. Sometimes it’s enforced by external regulations like PCI-DSS or HIPAA, sometimes it’s because you care about your customers’ data and your reputation. There are numerous aspects of security that you need to keep in mind - network access, operating system security, grants, encryption and so on. In this blog post, we’ll give you 10 tips on what to look at when securing your MySQL or MariaDB setup.

1. Remove users without password

MySQL used to come with a set of pre-created users, some of which could connect to the database without a password or, even worse, as anonymous users. This has changed in MySQL 5.7 which, by default, comes only with a root account that uses the password you chose at installation time. Still, there are MySQL installations which were upgraded from previous versions, and these installations keep the legacy users. Also, MariaDB 10.2 on CentOS 7 comes with anonymous users:

MariaDB [(none)]> select user, host, password from mysql.user where user like '';
+------+-----------------------+----------+
| user | host                  | password |
+------+-----------------------+----------+
|      | localhost             |          |
|      | localhost.localdomain |          |
+------+-----------------------+----------+
2 rows in set (0.00 sec)

As you can see, those are limited to access from localhost only, but regardless, you do not want users like that. While their privileges are limited, they can still run some commands which may reveal more information about the database - for example, the version number may help identify further vectors of attack.

[root@localhost ~]# mysql -uanonymous_user
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 19
Server version: 10.2.11-MariaDB MariaDB Server
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> SHOW GRANTS\G
*************************** 1. row ***************************
Grants for @localhost: GRANT USAGE ON *.* TO ''@'localhost'
1 row in set (0.00 sec)
MariaDB [(none)]> \s
--------------
mysql  Ver 15.1 Distrib 10.2.11-MariaDB, for Linux (x86_64) using readline 5.1
Connection id:        19
Current database:
Current user:        anonymous_user@localhost
SSL:            Not in use
Current pager:        stdout
Using outfile:        ''
Using delimiter:    ;
Server:            MariaDB
Server version:        10.2.11-MariaDB MariaDB Server
Protocol version:    10
Connection:        Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:        /var/lib/mysql/mysql.sock
Uptime:            12 min 14 sec
Threads: 7  Questions: 36  Slow queries: 0  Opens: 17  Flush tables: 1  Open tables: 11  Queries per second avg: 0.049
--------------
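
Getting rid of these anonymous accounts is straightforward - a sketch matching the two hosts from the earlier output:

MariaDB [(none)]> DROP USER ''@'localhost';
MariaDB [(none)]> DROP USER ''@'localhost.localdomain';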

Please note that users with very simple passwords are almost as insecure as users without any password. Passwords like “password” or “qwerty” are not really helpful.

2. Tighten remote access

First of all, remote access for superusers: this is taken care of by default when installing the latest MySQL (5.7) or MariaDB (10.2) - only local access is available. Still, it’s pretty common to see superusers accessible remotely for various reasons. The most common one is probably convenience - the database is managed by humans who want to make their job easier, so they add remote access to their databases. This is not a good approach, as remote access makes it easier to exploit potential (or verified) security vulnerabilities in MySQL - an attacker doesn’t need to get a connection to the host first.

Another step - make sure that every user can connect to MySQL only from specific hosts. You can always define several entries for the same user (myuser@host1, myuser@host2), which should help reduce the need for wildcards (myuser@’%’).
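
For example, instead of one wildcard entry, you can create one grant per application host (the user, hosts, database and password below are placeholders):

mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON db1.* TO 'myuser'@'host1' IDENTIFIED BY 'mySecr3t';
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON db1.* TO 'myuser'@'host2' IDENTIFIED BY 'mySecr3t';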

3. Remove test database

The test database, by default, is available to every user, especially to anonymous users. Such users can create tables and write to them. This can potentially become a problem on its own - any writes would add overhead and reduce database performance. Currently, after a default installation, only MariaDB 10.2 on CentOS 7 is affected by this - Oracle MySQL 5.7 and Percona Server 5.7 do not have the ‘test’ schema available.

[root@localhost ~]# mysql -uanonymous_user
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 13
Server version: 10.2.11-MariaDB MariaDB Server
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> SHOW GRANTS\G
*************************** 1. row ***************************
Grants for @localhost: GRANT USAGE ON *.* TO ''@'localhost'
1 row in set (0.00 sec)
MariaDB [(none)]> USE test;
Database changed
MariaDB [test]> CREATE TABLE testtable (a INT);
Query OK, 0 rows affected (0.01 sec)
MariaDB [test]> INSERT INTO testtable VALUES (1), (2), (3);
Query OK, 3 rows affected (0.01 sec)
Records: 3  Duplicates: 0  Warnings: 0
MariaDB [test]> SELECT * FROM testtable;
+------+
| a    |
+------+
|    1 |
|    2 |
|    3 |
+------+
3 rows in set (0.00 sec)

Of course, it may still happen that your MySQL 5.7 has been upgraded from a previous version in which the ‘test’ schema was not removed - you should check whether it still exists, and remove it if so.
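
Removing it is a matter of dropping the schema and the wide-open grants on it, mirroring what the mysql_secure_installation script does:

mysql> DROP DATABASE IF EXISTS test;
mysql> DELETE FROM mysql.db WHERE db = 'test' OR db = 'test\\_%';
mysql> FLUSH PRIVILEGES;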

4. Obfuscate access to MySQL

It is well known that MySQL runs on port 3306, and that its superuser is called ‘root’. To make things harder for attackers, it is quite simple to change both. To some extent, this is an example of security through obscurity, but it may at least stop automated attempts to get access to the ‘root’ user. To change the port, you need to edit my.cnf and set the ‘port’ variable to some other value. As for users - after MySQL is installed, you should create a new superuser (GRANT ALL … WITH GRANT OPTION) and then remove the existing ‘root@’ accounts.
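
A sketch of the superuser swap (the account name and password are placeholders - verify you can log in with the new account before dropping the old ones):

mysql> GRANT ALL PRIVILEGES ON *.* TO 'dbadmin'@'localhost' IDENTIFIED BY 'Str0ng&LongPass' WITH GRANT OPTION;
mysql> DROP USER 'root'@'localhost';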

5. Network security

Ideally, MySQL would not be available through the network and all connections would be handled locally, through the Unix socket. In some setups this is possible - in that case you can add the ‘skip-networking’ variable in my.cnf. This will prevent MySQL from using any TCP/IP communication; only the Unix socket would be available on Linux (named pipes and shared memory on Windows hosts).
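
A minimal my.cnf sketch for such a local-only setup; bind-address is a softer alternative if you still need TCP/IP on the loopback interface:

[mysqld]
skip-networking
# alternatively: bind-address = 127.0.0.1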

Most of the time though, such tight security is not feasible. In that case, you need to find another solution. First, you can use your firewall to allow traffic only from specific hosts to the MySQL server - for instance, application hosts (although they should be ok with reaching MySQL through proxies), the proxy layer, and maybe a management server. Other hosts in your network probably do not need direct access to the MySQL server. This will limit the possibilities of attack on your database, in case some hosts in your network are compromised.

If you happen to use proxies which allow regular expression matching for queries, you can use them to analyze the SQL traffic and block suspicious queries. Most likely your application hosts shouldn’t run “DELETE FROM your_table;” on a regular basis. If some data does need to be removed, it can be executed by hand, locally, on the MySQL instance. You can create rules to block, rewrite or redirect such queries using something like ProxySQL. MaxScale also gives you an option to block queries based on regular expressions.
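
A sketch of such a rule entered through the ProxySQL admin interface (the rule id, pattern and error message are examples):

mysql> INSERT INTO mysql_query_rules (rule_id, active, match_digest, error_msg, apply)
    -> VALUES (10, 1, '^DELETE FROM your_table', 'Bulk deletes are blocked', 1);
mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
mysql> SAVE MYSQL QUERY RULES TO DISK;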

6. Audit plugins

If you are interested in collecting data on who executed what and when, there are several audit plugins available for MySQL. If you use MySQL Enterprise, you can use MySQL Enterprise Audit. Percona and MariaDB also have their own versions of audit plugins. Lastly, the McAfee plugin for MySQL can also be used with different versions of MySQL. Generally speaking, those plugins collect more or less the same data - connect and disconnect events, queries executed, and tables accessed. All of this contains information about which user participated in such an event, from which host the user connected, when it happened, and so on. The output can be XML or JSON, so it’s much easier to parse than the general log contents (even though the data is rather similar). Such output can also be sent to syslog and, further, to some sort of log server for processing and analysis.
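
As an example, the MariaDB Audit Plugin can be enabled with a couple of statements (a sketch; which events you capture depends on your requirements):

mysql> INSTALL SONAME 'server_audit';
mysql> SET GLOBAL server_audit_logging = ON;
mysql> SET GLOBAL server_audit_events = 'CONNECT,QUERY';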

7. Disable LOAD DATA LOCAL INFILE

If both the server and the client have the ability to run LOAD DATA LOCAL INFILE, a client will be able to load data from a local file to a remote MySQL server. This can potentially be abused to read files the client has access to - for example, on an application server, one could access any file that the HTTP server has access to. To avoid it, you need to set local-infile=0 in my.cnf.
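
The relevant my.cnf snippet, plus a quick way to verify that the setting took effect after a restart:

[mysqld]
local-infile = 0

mysql> SHOW GLOBAL VARIABLES LIKE 'local_infile';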

8. File privileges

You have to keep in mind that MySQL security also depends on the operating system setup. MySQL stores data in the form of files. The MySQL server writes plenty of information to logs, and sometimes this information contains data - the slow query log, general log or binary log, for example. You need to make sure that this information is safe and accessible only to users who have to access it. Typically it means that only root and the user under whose rights MySQL is running should have access to all MySQL-related files. Most of the time it’s a dedicated user called ‘mysql’. You should check the MySQL configuration files and all the logs generated by MySQL, and verify that they are not readable by other users.
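
A sketch of tightening the typical locations (the paths are common defaults and may differ on your system):

$ chown -R mysql:mysql /var/lib/mysql
$ chmod -R o-rwx /var/lib/mysql
$ chown root:mysql /etc/my.cnf
$ chmod 640 /etc/my.cnf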


9. SSL and Encryption of Data in Transit

Preventing people from accessing configuration and log files is one thing. The other issue is to make sure data is securely transferred over the network. With the exception of setups where all the clients are local and use the Unix socket to access MySQL, in the majority of cases the data which forms a result set for a query leaves the server and is transferred to the client over the network. Data can also be transferred between MySQL servers, for example via standard MySQL replication or within a Galera cluster. Network traffic can be sniffed, and through those means, your data would be exposed.

To prevent this from happening, it is possible to use SSL to encrypt traffic, both server and client-side. You can create an SSL connection between a client and a MySQL server. You can also create an SSL connection between your master and your slaves, or between the nodes of a Galera cluster. This will ensure that all data that is transferred is safe and cannot be sniffed by an attacker who gained access to your network.

The MySQL documentation covers in detail how to set up SSL encryption. If you find it too cumbersome, ClusterControl can help you deploy a secure environment for MySQL replication or Galera cluster in a couple of clicks:

10. Encryption of Data at Rest

Securing data in transit using SSL encryption only partially solves the problem. You also need to take care of data at rest - all the data that is stored in the database. Data at rest encryption can also be a requirement for security regulations like HIPAA or PCI DSS. Such encryption can be implemented on multiple levels - you can encrypt the whole disk on which the files are stored. You can encrypt only the MySQL database through functionality available in the latest versions of MySQL or MariaDB. Encryption can also be implemented in the application, so that it encrypts the data before storing it in the database. Every option has its pros and cons: disk encryption helps only when disks are physically stolen, as the files are not encrypted on a running database server. MySQL database encryption solves this issue, but it cannot prevent access to data when the root account is compromised. Application level encryption is the most flexible and secure, but then you lose the power of SQL - it’s pretty hard to use encrypted columns in WHERE or JOIN clauses.

All flavors of MySQL provide some sort of data at rest encryption. Oracle’s MySQL uses Transparent Data Encryption to encrypt InnoDB tablespaces. This is available in the commercial MySQL Enterprise offering. It provides an option to encrypt InnoDB tablespaces; other files which also store data in some form (for example, binary logs, general log, slow query log) are not encrypted. This allows the toolchain (MySQL Enterprise Backup, but also xtrabackup, mysqldump and mysqlbinlog) to work correctly with such a setup.

Starting from MySQL 5.7.11, the community version of MySQL also got support for InnoDB tablespace encryption. The main difference compared to the enterprise offering is the way the keys are stored - the keys are not located in a secure vault, which is required for regulatory compliance. Likewise, starting from Percona Server 5.7.11, it is also possible to encrypt InnoDB tablespaces. In the recently published Percona Server 5.7.20, support for encrypting binary logs has been added. It is also possible to integrate with a Hashicorp Vault server via the keyring_vault plugin, matching (and even extending, with binary log encryption) the features available in Oracle’s MySQL Enterprise edition.
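
For example, in the community version of MySQL (5.7.11 or later), tablespace encryption can be enabled with the keyring_file plugin - which, as noted above, keeps the master key on local disk rather than in a secure vault - and then switched on per table. A sketch (the keyring path is the common packaged default):

[mysqld]
early-plugin-load = keyring_file.so
keyring_file_data = /var/lib/mysql-keyring/keyring

mysql> CREATE TABLE secrets (id INT PRIMARY KEY, payload TEXT) ENCRYPTION='Y';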

MariaDB added support for data encryption in 10.1.3 - it is a separate, enhanced implementation. It gives you the possibility to not only encrypt InnoDB tablespaces, but also InnoDB log files. As a result, data is more secure, but some of the tools won’t work in such a configuration. Xtrabackup will not work with encrypted redo logs, so MariaDB created a fork, MariaDB Backup, which adds support for MariaDB encryption. There are also issues with mysqlbinlog.

No matter which MySQL flavor you use, as long as it is a recent version, you have options to implement data at rest encryption via the database server, making sure that your data is additionally secured.

Securing MySQL or MariaDB is not trivial, but we hope these 10 tips will help you along the way.

ClusterControl Tips & Tricks: Securing your MySQL Installation (Updated)


Requires ClusterControl 1.2.11 or later. Applies to MySQL based clusters.

During the life cycle of a database installation, it is common for new user accounts to be created. It is good practice to verify once in a while that security is up to standards - that is, there should at least not be any accounts with global access rights, or accounts without a password.

Using ClusterControl, you can at any time perform a security audit.

In the User Interface go to Manage > Developer Studio. Expand the folders so that you see s9s/mysql/programs. Click on security_audit.js and then press Compile and Run.

If there are problems, you will clearly see them in the messages section:

Enlarged Messages output:

Here we have accounts that can connect from any host, and accounts which do not have a password. Such accounts should not exist in a secure database installation. That is rule number one. To correct this problem, click on mysql_secure_installation.js in the s9s/mysql/programs folder.

Click on the dropdown arrow next to Compile and Run and press Change Settings. You will see the following dialog; enter the argument “STRICT”:

Then press Execute. The mysql_secure_installation.js script will then perform the following on each MySQL database instance that is part of the cluster:

  1. Delete anonymous users.
  2. Drop the 'test' database (if it exists).
  3. If STRICT is given as an argument to mysql_secure_installation.js, it will also:
    • Remove accounts without passwords.

In the Message box you will see:

The MySQL database servers that are part of this cluster have now been secured, and you have reduced the risk of compromising your data.

You can re-run security_audit.js to verify that the actions have taken effect.

Happy Clustering!

PS.: To get started with ClusterControl, click here!

New Video - Ten Tips to Secure MySQL & MariaDB


This video, based on last week’s blog “Ten Tips on How to Achieve MySQL and MariaDB Security”, walks you through ten different items to keep in mind when deploying a MySQL or MariaDB database to production.

Database security is an essential part of any system. With more and more news reports of widespread data breaches coming in from around the world, there is no better time to check your environments and make sure you have implemented these basic steps to remain secure.

ClusterControl for Database Security

ClusterControl provides advanced deployment, monitoring and management features to ensure your databases and their data are secure. It ensures that your open source database deployments always adhere to basic security model setups for each technology.

ClusterControl provides the Package Summary Operational Report that shows you which technology and security patches are available, and can even execute the upgrades for you!

In addition ClusterControl offers…

  • Secure Deployments
    Every technology has its own unique security features and ClusterControl ensures that what should be enabled is enabled during deployment. This eliminates the risk of human error which could otherwise result in leaving the database vulnerable because of a security setting oversight.
  • Communication Security
    ClusterControl provides the ability to install a purchased or self-signed SSL certificate to encrypt the communications between the server and the client. Replication traffic within a Galera Cluster can also be encrypted. Keys for these certificates are entered into and managed by ClusterControl.
  • Backup Security
    Backups are encrypted at rest using the AES-256 CBC algorithm. An auto-generated key is stored in the cluster's configuration file under /etc/cmon.d. The backup files are transferred in encrypted format. Users can now secure their backups for offsite or cloud storage with the flip of a checkbox. This feature is available for select backup methods for MySQL, MongoDB & PostgreSQL.
  • User Management
    ClusterControl’s advanced user management features allow you to restrict read or write access to your data at the database or table level. ClusterControl also provides advisors that check that all of your users have proper passwords, and even comes with checks to make sure no part of your database is open to the public.
  • Reports & Auditing
    ClusterControl provides reporting and audit tools to ensure you remain compliant, whether it is to an industry standard or to your own requirements. It also provides several Developer Studio Advisors that check your database environment to ensure that it is secure. You can even create your own security advisors to automate your own best practices. In addition, several Operational Reports found in ClusterControl can provide you with information you need to know to ensure your database environment is secure.

Download ClusterControl today to take advantage of these database security features.

How to Secure the ClusterControl Server


In our previous blog post, we showed you how you can secure your open source databases with ClusterControl. But what about the ClusterControl server itself? How do we secure it? This will be the topic for today’s blog. We assume the host is solely for ClusterControl usage, with no other applications running on it.

Firewall & Security Group

First and foremost, we should close down all unnecessary ports and only open the ports used by ClusterControl. Internally, between ClusterControl and the database servers, only the netcat port (default 9999) matters. It needs to be open only if you would like to store backups on the ClusterControl server; otherwise, you can close it down.

From the external network, it's recommended to only open access to either HTTP (80) or HTTPS (443) for the ClusterControl UI. If you are running the ClusterControl CLI called 's9s', the CMON-TLS endpoint needs to be opened on port 9501. It's also possible to install database-related applications on top of the ClusterControl server, like HAProxy, Keepalived and ProxySQL. In that case, you have to open the necessary ports for these as well. Please refer to the documentation page for a list of ports for each service.

To set up firewall rules via iptables on the ClusterControl node, run:

$ iptables -A INPUT -p tcp --dport 9999 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 80 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 443 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 9501 -j ACCEPT
$ iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT

The above are the simplest commands. You can be stricter and extend them to follow your security policy - for example, by adding a network interface, destination address, source address, connection state and so on.
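
You could, for instance, restrict the netcat and HTTPS ports to known source networks and finish with a default-deny policy (the interface names and addresses below are illustrative):

$ iptables -A INPUT -i eth1 -p tcp -s 192.168.55.0/24 --dport 9999 -j ACCEPT
$ iptables -A INPUT -i eth0 -p tcp -s 10.0.1.0/24 --dport 443 -j ACCEPT
$ iptables -P INPUT DROP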

If you are running the setup in the cloud, security groups play a similar role. The following is an example of inbound security group rules for the ClusterControl server on AWS:

Different cloud providers provide different security group implementations, but the basic rules are similar.

Encryption

ClusterControl supports encryption of communications at different levels, to ensure the automation, monitoring and management tasks are performed as securely as possible.

Running on HTTPS

The installer script (install-cc.sh) will by default configure a self-signed SSL certificate for HTTPS usage. If you choose this access method as the main endpoint, you can block the plain HTTP service running on port 80 from the external network. However, ClusterControl still requires access to the CMONAPI (a legacy REST API interface), which runs by default on port 80 on the localhost. If you would like to block the HTTP port entirely, make sure you change the ClusterControl API URL under the Cluster Registrations page to use HTTPS instead:

The self-signed certificate configured by ClusterControl has 10 years (3650 days) of validity. You can verify the certificate validity with the following command (on a CentOS 7 server):

$  openssl x509 -in /etc/ssl/certs/s9server.crt -text -noout
...
        Validity
            Not Before: Apr  9 21:22:42 2014 GMT
            Not After : Mar 16 21:22:42 2114 GMT
...

Take note that the absolute path to the certificate file might be different depending on the operating system.

MySQL Client-Server Encryption

ClusterControl stores monitoring and management data inside MySQL databases on the ClusterControl node. Since MySQL itself supports client-server SSL encryption, ClusterControl is capable of utilizing this feature to establish encrypted communication with the MySQL server when writing and retrieving its data.

The following configuration options are supported for this purpose:

  • cmondb_ssl_key - path to the SSL key, for SSL encryption between CMON and the CMON DB.
  • cmondb_ssl_cert - path to the SSL certificate, for SSL encryption between CMON and the CMON DB.
  • cmondb_ssl_ca - path to the SSL CA, for SSL encryption between CMON and the CMON DB.

We covered the configuration steps in this blog post some time back.

There is a catch though. At the time of writing, the ClusterControl UI has a limitation in accessing the CMON DB through SSL using the cmon user. As a workaround, we are going to create another database user called cmonui for the ClusterControl UI and the ClusterControl CMONAPI. This user will not have SSL enabled on its privilege table.

mysql> GRANT ALL PRIVILEGES ON *.* TO 'cmonui'@'127.0.0.1' IDENTIFIED BY '<cmon password>';
mysql> FLUSH PRIVILEGES;

Update the ClusterControl UI and CMONAPI configuration files, located at clustercontrol/bootstrap.php and cmonapi/config/database.php respectively, with the newly created database user, cmonui:

# <wwwroot>/clustercontrol/bootstrap.php
define('DB_LOGIN', 'cmonui');
define('DB_PASS', '<cmon password>');
# <wwwroot>/cmonapi/config/database.php
define('DB_USER', 'cmonui');
define('DB_PASS', '<cmon password>');

These files will not be replaced when you perform an upgrade through package manager.

CLI Encryption

ClusterControl also comes with a command-line interface called 's9s'. This client parses the command line options and sends a specific job to the controller service listening on port 9500 (CMON) or 9501 (CMON with TLS). The latter is the recommended one. The installer script by default will configure s9s CLI to use 9501 as the endpoint port of the ClusterControl server.

Role-Based Access Control

ClusterControl uses Role-Based Access Control (RBAC) to restrict access to clusters and their respective deployment, management and monitoring features. This ensures that only authorized user requests are allowed. Access to functionality is fine-grained, allowing access to be defined by organisation or user. ClusterControl uses a permissions framework to define how a user may interact with the management and monitoring functionality, after they have been authorised to do so.

The RBAC user interface can be accessed via ClusterControl -> User Management -> Access Control:

All of the features are self-explanatory but if you want some additional description, please check out the documentation page.

If you have multiple users involved in database cluster operations, it's highly recommended to set access controls for them accordingly. You can also create multiple teams (organizations) and assign zero or more clusters to each.

Running on Custom Ports

ClusterControl can be configured to use custom ports for all of its dependent services. ClusterControl uses SSH as the main communication channel to manage and monitor nodes remotely, Apache to serve the ClusterControl UI, and MySQL to store monitoring and management data. You can run these services on custom ports to reduce the attack surface. The following ports are the usual targets:

  • SSH - default is 22
  • HTTP - default is 80
  • HTTPS - default is 443
  • MySQL - default is 3306

There are several things you have to change in order to run the above services on custom ports for ClusterControl to work properly. We have covered this in detail in the documentation page, Running on Custom Port.

Permission and Ownership

ClusterControl configuration files hold sensitive information and should be kept confidential and well protected. The files must be accessible by the root user/group only, with no read permission for others. In case the permission and ownership have been wrongly set, the following commands help restore them to the correct state:

$ chown root:root /etc/cmon.cnf /etc/cmon.d/*.cnf
$ chmod 700 /etc/cmon.cnf /etc/cmon.d/*.cnf

For the MySQL service, ensure the content of the MySQL data directory is owned by the "mysql" group; the owning user can be either "mysql" or "root":

$ chown -Rf mysql:mysql /var/lib/mysql

For the ClusterControl UI, the files must be owned by the Apache user, either "apache" on RHEL/CentOS or "www-data" on Debian-based OSes.

The SSH key used to connect to the database hosts is another very important aspect, as it holds the identity and must be kept with proper permission and ownership. Furthermore, SSH won't permit the usage of an insecurely-permissioned key file when initiating the remote call. Verify that the SSH key file used by the cluster, as referenced in the generated configuration files under the /etc/cmon.d/ directory, is accessible only by the user defined in the osuser option. For example, consider the osuser "ubuntu" and the key file /home/ubuntu/.ssh/id_rsa:

$ chown ubuntu:ubuntu /home/ubuntu/.ssh/id_rsa
$ chmod 700 /home/ubuntu/.ssh/id_rsa

Use a Strong Password

If you use the installer script to install ClusterControl, you are encouraged to use a strong password when prompted by the installer. There are at most two accounts that the installer script will need to configure (depending on your setup):

  • MySQL cmon password - Default value is 'cmon'.
  • MySQL root password - Default value is 'password'.

It is the user's responsibility to use strong passwords for those two accounts. The installer script supports a number of special characters for your password input, as mentioned in the installation wizard:

=> Set a password for ClusterControl's MySQL user (cmon) [cmon]
=> Supported special password characters: ~!@#$%^&*()_+{}<>?

Verify the content of /etc/cmon.cnf and /etc/cmon.d/cmon_*.cnf and ensure you are using a strong password whenever possible.

Changing the MySQL 'cmon' Password

If the configured password does not satisfy your password policy, there are several steps you need to perform to change the MySQL cmon password:

  1. Change the password inside the ClusterControl's MySQL server:

    mysql> ALTER USER 'cmon'@'127.0.0.1' IDENTIFIED BY 'newPass';
    mysql> ALTER USER 'cmon'@'{ClusterControl IP address or hostname}' IDENTIFIED BY 'newPass';
    mysql> FLUSH PRIVILEGES;
  2. Update all occurrences of 'mysql_password' options for controller service inside /etc/cmon.cnf and /etc/cmon.d/*.cnf:

    mysql_password=newPass
  3. Update all occurrences of 'DB_PASS' constants for ClusterControl UI inside /var/www/html/clustercontrol/bootstrap.php and /var/www/html/cmonapi/config/database.php:

    # <wwwroot>/clustercontrol/bootstrap.php
    define('DB_PASS', 'newPass');
    # <wwwroot>/cmonapi/config/database.php
    define('DB_PASS', 'newPass');
  4. Change the password on every MySQL server monitored by ClusterControl:

    mysql> ALTER USER 'cmon'@'{ClusterControl IP address or hostname}' IDENTIFIED BY 'newPass';
    mysql> FLUSH PRIVILEGES;
  5. Restart the CMON service to apply the changes:

    $ service cmon restart # systemctl restart cmon

Verify that the cmon process started correctly by looking at /var/log/cmon.log. Make sure you see something like the below:

2018-01-11 08:33:09 : (INFO) Additional RPC URL for events: 'http://127.0.0.1:9510'
2018-01-11 08:33:09 : (INFO) Configuration loaded.
2018-01-11 08:33:09 : (INFO) cmon 1.5.1.2299
2018-01-11 08:33:09 : (INFO) Server started at tcp://127.0.0.1:9500
2018-01-11 08:33:09 : (INFO) Server started at tls://127.0.0.1:9501
2018-01-11 08:33:09 : (INFO) Found 'cmon' schema version 105010.
2018-01-11 08:33:09 : (INFO) Running cmon schema hot-fixes.
2018-01-11 08:33:09 : (INFO) Schema auto-upgrade succeed (version 105010).
2018-01-11 08:33:09 : (INFO) Checked tables - seems ok
2018-01-11 08:33:09 : (INFO) Community version
2018-01-11 08:33:09 : (INFO) CmonCommandHandler: started, polling for commands.

Running it Offline

ClusterControl is able to manage your database infrastructure in an environment without Internet access. Some features will not work in that environment (backup to the cloud, deployment using public repos, upgrades), but the major features are there and work just fine. You can also choose to initially deploy everything with Internet access, and then cut off the Internet once the setup is tested and ready to serve production data.

By having ClusterControl and the database cluster isolated from the outside world, you have removed one of the important attack vectors.

Summary

ClusterControl can help secure your database cluster, but it does not secure itself. Ops teams must make sure that the ClusterControl server is also hardened from a security point of view.


How to Secure Galera Cluster - 8 Tips


As a distributed database system, Galera Cluster requires additional security measures compared to a centralized database. Data is distributed across multiple servers, and perhaps even across data centers. With significant data communication happening across nodes, there can be significant exposure if the appropriate security measures are not taken.

In this blog post, we are going to look into some tips on how to secure our Galera Cluster. Note that this blog builds upon our previous blog post - How to Secure Your Open Source Databases with ClusterControl.

Firewall & Security Group

The following ports are very important for a Galera Cluster:

  • 3306 - MySQL
  • 4567 - Galera communication and replication
  • 4568 - Galera IST
  • 4444 - Galera SST

From the external network, it is recommended to only open access to MySQL port 3306. The other three ports can be closed to the external network and allowed only for internal access between the Galera nodes. If you are running a reverse proxy in front of the Galera nodes, for example HAProxy, you can lock down the MySQL port from public access as well. In that case, ensure the port for the HAProxy monitoring script is open; the default is 9200 on the Galera node.

The following diagram illustrates our example setup on a three-node Galera Cluster, with an HAProxy facing the public network with its related ports:

Based on the above diagram, the iptables commands for database nodes are:

$ iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 3306 -j ACCEPT
$ iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 4444 -j ACCEPT
$ iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 4567:4568 -j ACCEPT
$ iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 9200 -j ACCEPT

While on the load balancer:

$ iptables -A INPUT -p tcp --dport 3307 -j ACCEPT

Make sure to end your firewall rules with deny all, so only traffic as defined in the exception rules is allowed. You can be stricter and extend the commands to follow your security policy - for example, by adding network interface, destination address, source address, connection state and what not.

MySQL Client-Server Encryption

MySQL supports encryption between the client and the server. First, we have to generate the certificates. Once configured, you can enforce user accounts to specify certain options to connect with encryption to a MySQL server.

The steps, sketched with example commands right after this list, require you to:

  1. Create a key for Certificate Authority (ca-key.pem)
  2. Generate a self-signed CA certificate (ca-cert.pem)
  3. Create a key for server certificate (server-key.pem)
  4. Generate a certificate for server and sign it with ca-key.pem (server-cert.pem)
  5. Create a key for client certificate (client-key.pem)
  6. Generate a certificate for client and sign it with ca-key.pem (client-cert.pem)
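
A minimal sketch of those six steps using OpenSSL; the file names match the list above, while the validity period and key sizes are illustrative choices:

$ # steps 1-2: CA key and self-signed CA certificate
$ openssl genrsa 2048 > ca-key.pem
$ openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca-cert.pem
$ # steps 3-4: server key and CA-signed server certificate
$ openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem
$ openssl rsa -in server-key.pem -out server-key.pem
$ openssl x509 -req -in server-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
$ # steps 5-6: client key and CA-signed client certificate
$ openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem
$ openssl rsa -in client-key.pem -out client-key.pem
$ openssl x509 -req -in client-req.pem -days 3600 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 02 -out client-cert.pem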

Always be careful with the CA private key (ca-key.pem) - anybody with access to it can use it to generate additional client or server certificates that will be accepted as legitimate when CA verification is enabled. The bottom line is that all the keys must be kept secret.

You can then add the SSL-related variables under the [mysqld] directive, for example:

ssl-ca=/etc/ssl/mysql/ca-cert.pem
ssl-cert=/etc/ssl/mysql/server-cert.pem
ssl-key=/etc/ssl/mysql/server-key.pem

Restart the MySQL server to load the changes. Then create a user with the REQUIRE SSL statement, for example:

mysql> GRANT ALL PRIVILEGES ON db1.* TO 'dbuser'@'192.168.1.100' IDENTIFIED BY 'mySecr3t' REQUIRE SSL;

A user created with REQUIRE SSL will be required to connect with the correct client SSL files (client-cert.pem, client-key.pem and ca-cert.pem).
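
A connection attempt with those files would then look something like the following sketch (the host name and file paths are placeholders):

$ mysql -u dbuser -p -h db1 --ssl-ca=/etc/ssl/mysql/ca-cert.pem --ssl-cert=/etc/ssl/mysql/client-cert.pem --ssl-key=/etc/ssl/mysql/client-key.pem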

With ClusterControl, client-server SSL encryption can easily be enabled from the UI, using the "Create SSL Encryption" feature.

Galera Encryption

Enabling encryption for Galera means IST will also be encrypted, because the communication happens via the same socket. SST, on the other hand, has to be configured separately as shown in the next section. All nodes in the cluster must have SSL encryption enabled - you cannot have a mix of nodes where some have SSL encryption enabled and others do not. The best time to configure this is when setting up a new cluster. However, if you need to add this on a running production system, you will unfortunately need to rebootstrap the cluster and there will be downtime.

All Galera nodes in the cluster must use the same key, certificate and CA (optional). You could also use the same key and certificate created for MySQL client-server encryption, or generate a new set for this purpose only. To activate encryption inside Galera, one has to append the option and value under wsrep_provider_options inside the MySQL configuration file on each Galera node. For example, consider the following existing line for our Galera node:

wsrep_provider_options = "gcache.size=512M; gmcast.segment=0;"

Append the related variables inside the quote, delimited by a semi-colon:

wsrep_provider_options = "gcache.size=512M; gmcast.segment=0; socket.ssl_cert=/etc/mysql/cert.pem; socket.ssl_key=/etc/mysql/key.pem;"

For more info on Galera's SSL-related parameters, see here. Perform this modification on all nodes. Then, stop the cluster (one node at a time) and bootstrap from the last node that shut down. You can verify if SSL is loaded correctly by looking into the MySQL error log:

2018-01-19T01:15:30.155211Z 0 [Note] WSREP: gcomm: connecting to group 'my_wsrep_cluster', peer '192.168.10.61:,192.168.10.62:,192.168.10.63:'
2018-01-19T01:15:30.159654Z 0 [Note] WSREP: SSL handshake successful, remote endpoint ssl://192.168.10.62:53024 local endpoint ssl://192.168.10.62:4567 cipher: AES128-SHA compression:

With ClusterControl, Galera Replication encryption can be easily enabled using the "Create SSL Galera Encryption" feature.

SST Encryption

When SST happens without encryption, the data communication is exposed while the SST process is ongoing. SST is a full data synchronization process from a donor to a joiner node. If an attacker was able to "see" the full data transmission, the person would get a complete snapshot of your database.

SST with encryption is supported only for the mysqldump and xtrabackup-v2 methods. For mysqldump, the user must be granted with "REQUIRE SSL" on all nodes, and the configuration is similar to standard MySQL client-server SSL encryption (as described in the previous section). Once the client-server encryption is activated, create a new SST user with SSL enforced:

mysql> GRANT ALL ON *.* TO 'sst_user'@'%' IDENTIFIED BY 'mypassword' REQUIRE SSL;

For rsync, we recommend using galera-secure-rsync, a drop-in SSL-secured rsync SST script for Galera Cluster. It operates almost exactly like wsrep_sst_rsync, except that it secures the actual communication with SSL using socat. Generate the required client/server key and certificate files, copy them to all nodes and specify "secure_rsync" as the SST method inside the MySQL configuration file to activate it:

wsrep_sst_method=secure_rsync

For xtrabackup, the following configuration options must be enabled inside the MySQL configuration file under the [sst] directive:

[sst]
encrypt=4
ssl-ca=/path/to/ca-cert.pem
ssl-cert=/path/to/server-cert.pem
ssl-key=/path/to/server-key.pem

A database restart is not necessary. If this node is selected by Galera as a donor, these configuration options will be picked up automatically when Galera initiates the SST.

SELinux

Security-Enhanced Linux (SELinux) is an access control mechanism implemented in the kernel. Without SELinux, only traditional access control methods such as file permissions or ACL are used to control the file access of users.

By default, with strict enforcing mode enabled, everything is denied and the administrator has to define a series of exception policies for the elements of the system that are required in order to function. Disabling SELinux entirely has become a common poor practice for many RedHat-based installations nowadays.

Depending on the workloads, usage patterns and processes, the best way is to create your own SELinux policy module tailored for your environment. What you really need to do is to set SELinux to permissive mode (logging only, without enforcing), and trigger events that can happen on a Galera node for SELinux to log. The more extensive, the better. Examples of such events:

  • Starting node as donor or joiner
  • Restart node to trigger IST
  • Use different SST methods
  • Backup and restore MySQL databases using mysqldump or xtrabackup
  • Enable and disable binary logs

One example: if the Galera node is monitored by ClusterControl and the query monitor feature is enabled, ClusterControl will enable/disable the slow query log variable to capture the slow running queries. Thus, you would see the following denial in the audit.log:

$ grep -e denied audit/audit.log | grep -i mysql
type=AVC msg=audit(1516835039.802:37680): avc:  denied  { open } for  pid=71222 comm="mysqld" path="/var/log/mysql/mysql-slow.log" dev="dm-0" ino=35479360 scontext=system_u:system_r:mysqld_t:s0 tcontext=unconfined_u:object_r:var_log_t:s0 tclass=file

The idea is to let all possible denials get logged into the audit log, which later can be used to generate the policy module using audit2allow before loading it into SELinux. Codership has covered this in detail in the documentation page, SELinux Configuration.
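
A sketch of that workflow, assuming the audit log lives at /var/log/audit/audit.log and using "galera" as an illustrative module name:

$ setenforce 0    # switch to permissive mode (logs denials without blocking)
$ # ...exercise the events listed above (SST, IST, backups, etc.)...
$ grep mysqld /var/log/audit/audit.log | audit2allow -M galera
$ semodule -i galera.pp
$ setenforce 1    # back to enforcing mode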

SST Account and Privileges

SST is an initial syncing process performed by Galera. It brings a joiner node up-to-date with the rest of the members in the cluster. The process basically exports the data from the donor node and restores it on the joiner node, before the joiner is allowed to catch up on the remaining transactions from the queue (i.e., those that happened during the syncing process). Three SST methods are supported:

  • mysqldump
  • rsync
  • xtrabackup (or xtrabackup-v2)

For mysqldump SST usage, the following privileges are required:

  • SELECT, SHOW VIEW, TRIGGER, LOCK TABLES, RELOAD, FILE

We are not going to go further with mysqldump, because it is probably not often used as an SST method in production. Besides, it is a blocking procedure on the donor. Rsync is usually the preferred second choice after xtrabackup due to its faster syncing time, and it is less error-prone as compared to mysqldump. SST authentication is ignored with rsync, therefore you may skip configuring SST account privileges if rsync is the chosen SST method.

Moving along with xtrabackup, the following privileges are advised for standard backup and restore procedures based on the Xtrabackup documentation page:

  • CREATE, CREATE TABLESPACE, EVENT, INSERT, LOCK TABLE, PROCESS, RELOAD, REPLICATION CLIENT, SELECT, SHOW VIEW, SUPER

However, for xtrabackup's SST usage, only the following privileges matter:

  • PROCESS, RELOAD, REPLICATION CLIENT

Thus, the GRANT statement for SST can be minimized as:

mysql> GRANT PROCESS,RELOAD,REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost' IDENTIFIED BY 'SuP3R@@sTr0nG%%P4ssW0rD';

Then, configure wsrep_sst_auth accordingly inside the MySQL configuration file:

wsrep_sst_auth = sstuser:SuP3R@@sTr0nG%%P4ssW0rD

Only grant the SST user access from localhost and use a strong password. Avoid using the root user as the SST account, because it would expose the root password inside the configuration file under this variable. Plus, changing or resetting the MySQL root password would break SST in the future.

MySQL Security Hardening

Galera Cluster is a multi-master replication plugin for InnoDB storage engine, which runs on MySQL and MariaDB forks. Therefore, standard MySQL/MariaDB/InnoDB security hardening recommendations apply to Galera Cluster as well.

This topic has been covered in numerous blog posts, including several of our own. They summarize the necessity of encrypting data at rest and data in transit, having audit plugins, general security guidelines, network security best practices and so on.

Use a Load Balancer

There are a number of database load balancers (reverse proxy) that can be used together with Galera - HAProxy, ProxySQL and MariaDB MaxScale to name some of them. You can set up a load balancer to control access to your Galera nodes. It is a great way of distributing the database workload between the database instances, as well as restricting access, e.g., if you want to take a node offline for maintenance, or if you want to limit the number of connections opened on the Galera nodes. The load balancer should be able to queue connections, and therefore provide some overload protection to your database servers.

ProxySQL, a powerful database reverse-proxy which understands MySQL and MariaDB, can be extended with many useful security features like a query firewall, to block offending queries from reaching the database server. The query rules engine can also be used to rewrite bad queries into something better/safer, or redirect them to another server which can absorb the load without affecting any of the Galera nodes. MariaDB MaxScale is also capable of blocking queries based on regular expressions with its Database Firewall filter.
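
As an illustration, a ProxySQL query rule that rejects a dangerous statement pattern could look like the following sketch, entered on the ProxySQL admin interface (port 6032 and the admin credentials are ProxySQL defaults; the rule itself is purely illustrative):

$ mysql -u admin -padmin -h 127.0.0.1 -P 6032
mysql> INSERT INTO mysql_query_rules (rule_id, active, match_digest, error_msg, apply)
    -> VALUES (1, 1, '^DELETE FROM .* WHERE 1=1', 'Blocked by firewall rule', 1);
mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
mysql> SAVE MYSQL QUERY RULES TO DISK;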

Another advantage of having a load balancer for your Galera Cluster is the ability to host a data service without exposing the database tier to the public network. The proxy server can be used as the bastion host to gain access to the database nodes in a private network. By having the database cluster isolated from the outside world, you have removed one of the important attack vectors.

That's it. Always stay secure and protected.

MySQL vs MariaDB vs Percona Server: Security Features Comparison


Security of data is critical for any organisation. It's an important aspect that can heavily influence the design of the database environment. When deciding which MySQL flavour to use, you need to take into consideration the security features available from the different server vendors. In this blog post, we'll make a short comparison of the latest versions of the MySQL Community Edition from Oracle, Percona Server and MariaDB:

mysqld  Ver 5.7.20-19 for Linux on x86_64 (Percona Server (GPL), Release 19, Revision 3c5d3e5d53c)
mysqld  Ver 5.7.21 for Linux on x86_64 (MySQL Community Server (GPL))
mysqld  Ver 10.2.12-MariaDB for Linux on x86_64 (MariaDB Server)

We are going to use CentOS 7 as the operating system - please keep in mind that results we present here may be slightly different on other distributions like Debian or Ubuntu. We'd also like to focus on the differences and will not cover the commonalities - Percona Server and MariaDB are flavors of MySQL, so some of the security features (e.g., what the access privileges of MySQL files look like) are shared among them.

Initial security

Users

Both Percona Server and MySQL Community Server come with a randomly generated temporary password for the root user. You need to check the contents of MySQL's error log to find it:

2018-01-19T13:47:45.532148Z 1 [Note] A temporary password is generated for root@localhost: palwJu7uSL,g

Once you log in, a password change is forced upon you:

[root@localhost ~]# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.21

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select * from mysql.user;
ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.

The password has to be strong enough; this is enforced by the validate_password plugin:

mysql> alter user root@localhost identified by 'password123.';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
mysql> alter user root@localhost identified by 'password123.A';
Query OK, 0 rows affected (0.00 sec)

MariaDB does not generate a random root password; instead, it provides passwordless access to the root account from (and only from) localhost.

[root@localhost ~]# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 12
Server version: 10.2.12-MariaDB MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SELECT CURRENT_USER();
+----------------+
| CURRENT_USER() |
+----------------+
| root@localhost |
+----------------+
1 row in set (0.00 sec)

This is not a big issue during the initial deployment phase, as the DBA is supposed to configure and secure access to the database later on (by running mysql_secure_installation, for example). The bigger problem here is that a good practice is not enforced by MariaDB. If you don't have to set up a strong password for the root user, it could be that nobody changes it later and passwordless access will remain. That would then become a serious security threat.
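
Setting a root password right after installation closes this gap - a minimal sketch, with the password itself being just a placeholder:

$ mysql -u root
MariaDB [(none)]> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('Sup3r.Str0ng.Pass');

Running mysql_secure_installation accomplishes the same, and additionally offers to remove the anonymous accounts and the 'test' schema discussed below.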

Another aspect we’d like to look at is anonymous, passwordless access. Anonymous users allow anyone to get in, it doesn’t have to be a predefined user. If such access is passwordless, it means that anyone can connect to MySQL. Typically such account has only USAGE privilege but even then it is possible to print a status (‘\s’) which contains information like MySQL version, character set etc. Additionally, if ‘test’ schema is available, such user has the ability to write to that schema.

Neither MySQL Community Server nor Percona Server has any anonymous users defined in MySQL:

mysql> select user, host, authentication_string from mysql.user;
+---------------+-----------+-------------------------------------------+
| user          | host      | authentication_string                     |
+---------------+-----------+-------------------------------------------+
| root          | localhost | *EB965412B594F67C8EB611810EF8D406F2CF42BD |
| mysql.session | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE |
| mysql.sys     | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE |
+---------------+-----------+-------------------------------------------+
3 rows in set (0.00 sec)

On the other hand, MariaDB is open for anonymous, passwordless access.

MariaDB [(none)]> select user,host,password from mysql.user;
+------+-----------------------+----------+
| user | host                  | password |
+------+-----------------------+----------+
| root | localhost             |          |
| root | localhost.localdomain |          |
| root | 127.0.0.1             |          |
| root | ::1                   |          |
|      | localhost             |          |
|      | localhost.localdomain |          |
+------+-----------------------+----------+
6 rows in set (0.00 sec)

In addition to that, the ‘test’ schema is available - which allows anonymous users to issue writes to the database.

[root@localhost ~]# mysql -umyanonymoususer
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 14
Server version: 10.2.12-MariaDB MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use test;
Database changed
MariaDB [test]> CREATE TABLE mytab (a int);
Query OK, 0 rows affected (0.01 sec)

MariaDB [test]> INSERT INTO mytab VALUES (1), (2);
Query OK, 2 rows affected (0.02 sec)
Records: 2  Duplicates: 0  Warnings: 0

MariaDB [test]> SELECT * FROM mytab;
+------+
| a    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.00 sec)

This poses a serious threat and needs to be sorted out. Otherwise, it can easily be exploited to attempt to overload the server with writes.
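
To close this hole, either run mysql_secure_installation or remove the anonymous accounts and the 'test' schema manually - a sketch, with host names taken from the user listing above:

MariaDB [(none)]> DROP USER ''@'localhost';
MariaDB [(none)]> DROP USER ''@'localhost.localdomain';
MariaDB [(none)]> DROP DATABASE test;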

Data in transit security

MySQL Community Server and both of its forks support the use of SSL to encrypt data in transit. This is extremely important for Wide Area Networks, but also shouldn't be overlooked in a local network. SSL can be used both client- and server-side. Regarding server-side configuration (to encrypt traffic from master to slaves, for example), it looks identical across the board. There is a difference, though, when it comes to client-side SSL encryption, introduced in MySQL 5.7. Prior to 5.7, one had to generate SSL keys and CAs and define them in the configurations of both server and client - and this is how MariaDB 10.2's SSL setup still looks. In both MySQL Community Server 5.7 and Percona Server 5.7 (which is based on MySQL 5.7), there is no need to pre-generate keys. It is all done automatically, in the background. All you need to do is to enable SSL on your client by setting the correct '--ssl-mode'. For MySQL's CLI client, this is not even needed as it enables SSL by default:

[root@localhost ~]# mysql -p -h127.0.0.1
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.7.21 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> \s
--------------
mysql  Ver 14.14 Distrib 5.7.21, for Linux (x86_64) using  EditLine wrapper

Connection id:        6
Current database:
Current user:        root@localhost
SSL:            Cipher in use is DHE-RSA-AES256-SHA
Current pager:        stdout
Using outfile:        ''
Using delimiter:    ;
Server version:        5.7.21 MySQL Community Server (GPL)
Protocol version:    10
Connection:        127.0.0.1 via TCP/IP
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
TCP port:        3306
Uptime:            2 days 21 hours 51 min 52 sec

Threads: 1  Questions: 15  Slow queries: 0  Opens: 106  Flush tables: 1  Open tables: 99  Queries per second avg: 0.000
--------------

On the other hand, MariaDB requires additional configuration, as SSL is disabled by default:

[root@localhost ~]# mysql -h127.0.0.1
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 18
Server version: 10.2.12-MariaDB MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> \s
--------------
mysql  Ver 15.1 Distrib 10.2.12-MariaDB, for Linux (x86_64) using readline 5.1

Connection id:        18
Current database:
Current user:        root@localhost
SSL:            Not in use
Current pager:        stdout
Using outfile:        ''
Using delimiter:    ;
Server:            MariaDB
Server version:        10.2.12-MariaDB MariaDB Server
Protocol version:    10
Connection:        127.0.0.1 via TCP/IP
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
TCP port:        3306
Uptime:            2 days 22 hours 26 min 58 sec

Threads: 7  Questions: 45  Slow queries: 0  Opens: 18  Flush tables: 1  Open tables: 12  Queries per second avg: 0.000
--------------
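
To bring MariaDB on par, you would generate a CA plus server and client certificates (as sketched in the client-server encryption section earlier) and point the server at them in its configuration file - the paths below are placeholders:

[mysqld]
ssl-ca=/etc/mysql/ssl/ca-cert.pem
ssl-cert=/etc/mysql/ssl/server-cert.pem
ssl-key=/etc/mysql/ssl/server-key.pem

After a restart, clients can connect with the --ssl option (plus their own certificate and key, if the account requires X509).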

Data at rest encryption

First of all, backups - there are freely available backup tools like xtrabackup or MariaDB Backup (which is a fork of xtrabackup). These allow you to create encrypted backups of all three MySQL flavors we discuss in this blog post.

All three flavours support encryption of the running database, but there are differences in what pieces of data are encrypted.

The MySQL Community Server supports encryption of InnoDB tablespaces only. Keys used for encryption are stored in files (which is not compliant with regulations - keys should be stored in a vault - something which MySQL Enterprise supports). Percona Server is based on MySQL Community Server, so it also supports encryption of InnoDB tablespaces. Recently, in Percona Server 5.7.20, support for encryption of general tablespaces (compared to only individual ones in previous versions and MySQL Community Edition) was added. Support for encryption of binary logs was also added. Percona Server comes with a keyring_vault plugin, which can be used to store keys in Hashicorp Vault server, making Percona Server 5.7.20 compliant with regulatory requirements regarding data at rest encryption.

MariaDB 10.2 has more advanced data-at-rest encryption support. In addition to tablespace and binary/relay log encryption, it has support for encrypting InnoDB redo logs. Currently, it is the most complete solution of the three regarding data encryption.
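
A minimal configuration sketch for MariaDB 10.2 using the file_key_management plugin; the key file path is a placeholder, and a production setup should additionally protect or encrypt the key file itself:

[mysqld]
plugin_load_add = file_key_management
file_key_management_filename = /etc/mysql/encryption/keyfile
innodb_encrypt_tables = ON
innodb_encrypt_log = ON
encrypt_binlog = ON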

Audit logging

All three MySQL flavors have support for audit logging. Their scope is pretty much comparable: connect and disconnect events, queries executed, tables accessed. The logs contain information about which user participated in such an event, from which host the user logged in, when it happened, and similar info. Such events can also be logged via syslog and stored on an external log server to enable log analysis and parsing.

Data masking, SQL firewall

All of the discussed MySQL flavors work with some kind of tool which allows implementing data masking, and which is able to block SQL traffic based on rules. Data masking is a method of obfuscating data outside of the database, before it reaches the client. An example would be credit card data which is stored in plain text in the database, but when a developer wants to query such data, she will see 'xxxxxxxx...' instead of the numbers. The tools we are talking about here are ProxySQL and MaxScale. MaxScale is a product of MariaDB Corporation, and is subscription-based. ProxySQL is a free-to-use database proxy. Both proxies can be used with any of the MySQL flavours.

That’s all for today folks. For further reading, check out these 10 tips for securing your MySQL and MariaDB databases.

PostgreSQL Privileges & User Management - What You Should Know


User management within PostgreSQL can be tricky. Typically, new users are managed in concert across a couple of key areas in the environment. Oftentimes, privileges are perfect on one front, yet configured incorrectly on the other. This blog post will provide practical 'Tips and Tricks' for setting up a user - or role, as we will come to know it - within PostgreSQL.

The subject areas we will focus on are:

  • PostgreSQL's Take on Roles

You will learn about roles, role attributes, best practices for naming your roles, and common role setups.

  • The pg_hba.conf file

In this section we will look at one of the key files and its settings, for client-side connections and communication with the server.

  • Database, Table, and Column level privileges and restrictions.

Looking to configure roles for optimal performance and usage? Do your tables contain sensitive data, only accessible to privileged roles, yet with the need to allow different roles to perform limited work? These questions and more will be answered in this section.

PostgreSQL's Take on Roles - What is a 'Role' and how to create one?

Permissions for database access within PostgreSQL are handled with the concept of a role, which is akin to a user. Roles can represent groups of users in the PostgreSQL ecosystem as well.

PostgreSQL gives roles the capacity to assign privileges on database objects they own, enabling access and actions on those objects. Roles have the ability to grant membership to another role. Attributes provide customization options for permitted client authentication.

The attributes available for roles through the CREATE ROLE command are listed in the official PostgreSQL documentation.

Below are the attributes you will commonly assign when setting up a new role. Most of these are self-explanatory. However, a brief description is provided to clear up any confusion, along with example uses.

SUPERUSER - A database SUPERUSER deserves a word of caution. Bottom line: roles with this attribute can create another SUPERUSER; as a matter of fact, this attribute is required to create another SUPERUSER role. Since roles with this attribute bypass all permission checks, grant this privilege judiciously.

CREATEDB - Allows the role to create databases.

CREATEROLE - With this attribute, a role can issue the CREATE ROLE command. Hence, create other roles.

LOGIN - Enables the ability to log in. A role name with this attribute can be used in the client connection command. More details on this attribute with forthcoming examples.

Certain attributes have an explicitly named polar opposite, which is typically the default when left unspecified.

e.g.
SUPERUSER | NOSUPERUSER
CREATEROLE | NOCREATEROLE
LOGIN | NOLOGIN

Let's look at some of these attributes in action for various configurations you can set up to get going.

Creating And Dropping Roles

Creating a role is relatively straightforward. Here's a quick example:

postgres=# CREATE ROLE $money_man;
ERROR: syntax error at or near "$"
LINE 1: CREATE ROLE $money_man;

What went wrong there? It turns out that unquoted role names cannot start with anything other than a letter (or an underscore).

"What about wrapping the name in double quotes?" Let's see:

postgres=# CREATE ROLE "$money_man";
CREATE ROLE

That worked, though probably not a good idea. How about a special character in the middle of the name?

postgres=# CREATE ROLE money$_man;
CREATE ROLE

No problem there. Even without double quotes, no error was returned.

I'm just not fond of the name structure of $money_man for a user, so I'm dropping you, $money_man, and starting afresh. The DROP ROLE command takes care of removing a role. Here it is in use.

postgres=# DROP ROLE $money_man;
ERROR: syntax error at or near "$"
LINE 1: DROP ROLE $money_man;

And another error with the $money_man role. Again, resorting to the double quotes it is.

postgres=# DROP ROLE "$money_man";
DROP ROLE

The LOGIN privilege

Let's look at two different users, one with the LOGIN privilege and one without. I'll assign them passwords as well.

postgres=# CREATE ROLE nolog_user WITH PASSWORD 'pass1';
CREATE ROLE
postgres=# CREATE ROLE log_user WITH LOGIN PASSWORD 'pass2';
CREATE ROLE

Note: The passwords provided to the above fictional roles are for demonstration purposes only. You should always strive to provide unique and hardened passwords when implementing roles. While a password is better than no password, a hardened password is even better than a trivial one.

Let's assign log_user the CREATEDB and CREATEROLE attributes with the ALTER ROLE command.

postgres=# ALTER ROLE log_user CREATEROLE CREATEDB;
ALTER ROLE

You can verify these set attributes by checking the pg_roles catalog. Two columns of interest are rolcreaterole and rolcreatedb. Both are of the Boolean data type, so they should be set to t (true) for these attributes.

Confirm with a similar SELECT query.

postgres=# SELECT rolcreaterole, rolcreatedb FROM pg_roles WHERE rolname = 'log_user';
rolcreaterole | rolcreatedb
---------------+-------------
t | t
(1 row)

How can you determine the existing roles present in the database?

Two available methods are the psql \du command or selecting from the pg_roles catalog.

Here they both are in use.

postgres=> \du
List of roles
Role name | Attributes | Member of
------------+------------------------------------------------------------+-----------
log_user | Create role, Create DB | {}
nolog_user | Cannot login | {}

postgres=> SELECT rolname FROM pg_roles;
rolname
----------------------
nolog_user
log_user
(2 rows)

Logging in

Let's give both roles an opportunity to log in to the server.

psql -U nolog_user -W postgres
Password for user nolog_user:
psql: FATAL: no pg_hba.conf entry for host "[local]", user "nolog_user", database "postgres", SSL off
psql -U log_user -W postgres
Password for user log_user:
psql: FATAL: no pg_hba.conf entry for host "[local]", user "log_user", database "postgres", SSL off

To resolve this issue, we have to dig into the pg_hba.conf file. The solution is discussed in its specific section as we continue in this post.

Actionable Takeaways

  • CREATE ROLE and its counterpart, DROP ROLE, are your go-to commands for implementing and removing roles.
  • ALTER ROLE handles changing the attributes of a role.
  • Roles are valid within all databases due to definition at the database cluster level.
  • Keep in mind, creating a role name beginning with a special character, requires you to 'address' it with double quotes.
  • Roles and their privileges are established using attributes.
To establish roles needing the LOGIN attribute by default, CREATE USER is an optional command at your disposal. Used in lieu of CREATE ROLE role_name LOGIN, they are essentially equal, as the brief example after this list shows.
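
For example, the following two statements are interchangeable (example_user is a hypothetical name, and you would run only one of them):

postgres=# CREATE USER example_user WITH PASSWORD 'pass3';
postgres=# CREATE ROLE example_user WITH LOGIN PASSWORD 'pass3';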

The pg_hba.conf file - Establishing common ground between the server and the client

Covering all aspects and settings for the pg_hba.conf file in one blog post would be daunting at best. Instead, this section will present common pitfalls you may encounter and solutions to remedy them.

Successful connections require a combined effort from both parts as a whole. Roles connecting to the server must still meet access restrictions set at the database level, after passing the settings in the pg_hba.conf file.

Relevant examples of this relationship are included as this section progresses.

To locate your pg_hba.conf file, issue a similar SELECT query on the pg_settings VIEW. You must be logged in as a SUPERUSER to query this VIEW.

postgres=# SELECT name, setting
FROM pg_settings WHERE name LIKE '%hba%';
name | setting
----------+-------------------------------------
hba_file | /etc/postgresql/10/main/pg_hba.conf
(1 row)

The pg_hba.conf file contains records specifying one of seven available formats for a given connection request. See the full spectrum here.

For the purpose of this blog post, we will look at settings you can use for a local environment.

Perhaps this server is for your continued learning and study (as mine is).

I must make special note that these settings are not the optimal settings for a hardened system containing multiple users.

The fields for this type of connection are:

local database user auth-method [auth-options]

Where they mean:

local - connections are attempted with Unix-domain sockets.

database - Specifies the database(s) named for this record match.

user - The database user name matched for this record. A comma-separated list of multiple users or all is allowed for this field as well.

auth-method - Is used when a connection matches this unique record. The possible choices for this field are:

  • trust
  • reject
  • scram-sha-256
  • md5
  • password
  • gss
  • sspi
  • ident
  • peer
  • ldap
  • radius
  • cert
  • pam
  • bsd

The lines set in pg_hba.conf file for roles nolog_user and log_user look like this:

local all nolog_user password
local all log_user password

Note: Since the password is sent in clear text, this method should not be used in untrusted environments or over untrusted networks.
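
In such environments, the md5 method (or scram-sha-256, available from PostgreSQL 10 onward) is a safer drop-in choice, for example:

local all log_user md5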

Let's look at three interesting columns from the pg_hba_file_rules VIEW with the below query. Again, your role needs the SUPERUSER attribute to query this VIEW.

postgres=# SELECT database, user_name, auth_method
postgres-# FROM pg_hba_file_rules
postgres-# WHERE CAST(user_name AS TEXT) LIKE '%log_user%';
database | user_name | auth_method
----------+--------------+-------------
{all} | {nolog_user} | password
{all} | {log_user} | password
(2 rows)

The query returns the same information as the lines we added to the pg_hba.conf file above. At first glance, it looks as if both roles can log in.

We will test and confirm.

psql -U nolog_user -W postgres
Password for user nolog_user:
psql: FATAL: role "nolog_user" is not permitted to log in
psql -U log_user -W postgres
Password for user log_user:
psql (10.1)
Type "help" for help.
postgres=>

The key point here is that although nolog_user and log_user are both able to log in according to the pg_hba.conf file, only log_user is allowed to actually log in.

Where log_user passed the database-level access restrictions (by having the LOGIN attribute), nolog_user did not.

Let's edit log_user's line in the pg_hba.conf file and change the database name this role is allowed to access. Here is the change, indicating log_user can now login to the trial database only.

local trial log_user password

First let's try to login to the postgres database, which log_user previously had access to due to the all flag.

$ psql -U log_user -W postgres
Password for user log_user:
psql: FATAL: no pg_hba.conf entry for host "[local]", user "log_user", database "postgres", SSL off

Now let's try the trial database, which log_user does have the privilege to access:

$ psql -U log_user -W trial
Password for user log_user:
psql (10.1)
Type "help" for help.
trial=>

No error there and the trial=> prompt shows the currently connected database.

These settings apply within the server environment as well, once a connection is established.

Let's attempt a connection to that postgres database again:

trial=> \c postgres;
Password for user log_user:
FATAL: no pg_hba.conf entry for host "[local]", user "log_user", database "postgres", SSL off
Previous connection kept

Through the examples presented here, you should be aware of the customization options for the roles in your cluster.

Note: Oftentimes, reloading the pg_hba.conf file is required for changes to take effect.

Use the pg_ctl utility to reload your server.

The syntax would be:

pg_ctl reload [-D datadir] [-s]

To know where your datadir is, you can query the pg_settings system VIEW, if logged in as a SUPERUSER with a similar SELECT query as below.

postgres=# SELECT setting FROM pg_settings WHERE name = 'data_directory';
           setting
-----------------------------
 /var/lib/postgresql/10/main
(1 row)

Then, give your shell to the postgres user (or other SUPERUSER) with:

$ sudo -u postgres bash

Unless you have added the pg_ctl utility to your $PATH, you must fully qualify it for use, then pass the command to execute, along with the datadir location.

Here is an example:

$ /usr/lib/postgresql/10/bin/pg_ctl reload -D /var/lib/postgresql/10/main
server signaled

Let’s check the server's status with:

$ /usr/lib/postgresql/10/bin/pg_ctl status -D /var/lib/postgresql/10/main
pg_ctl: server is running (PID: 1415)
/usr/lib/postgresql/10/bin/postgres "-D" "/var/lib/postgresql/10/main" "-c" "config_file=/etc/postgresql/10/main/postgresql.conf"
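
Alternatively, a SUPERUSER can reload the configuration without leaving psql:

postgres=# SELECT pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)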

Actionable takeaways

  • Roles must pass requirements from both the pg_hba.conf file and database level access privileges.
  • pg_hba.conf file is checked from the top down, for each connection request. Order in the file is significant.

Database, Table, and Column privileges and restrictions - Tailor fit roles for tasks and responsibilities

In order for roles to use database objects (tables, views, columns, functions, etc...), they must be granted access privileges to them.

The GRANT command defines these essential privileges.

We'll go over a few examples to get the essence of its use.

Creating databases

Since log_user was granted the CREATEDB and CREATEROLE attributes, we can use this role to create a test database named trial.

postgres=> CREATE DATABASE trial;
CREATE DATABASE

In addition to creating a new ROLE:

postgres=> CREATE ROLE db_user WITH LOGIN PASSWORD 'scooby';
CREATE ROLE

Finally, log_user will connect to the new trial database:

postgres=> \c trial;
Password for user log_user:
You are now connected to database "trial" as user "log_user".
trial=>

Notice the prompt changed to the name 'trial' indicating that we are connected to that database.

Let's utilize log_user to CREATE a mock table.

trial=> CREATE TABLE another_workload(
trial(> id INTEGER,
trial(> first_name VARCHAR(20),
trial(> last_name VARCHAR(20),
trial(> sensitive_info TEXT);
CREATE TABLE

Role log_user recently created a helper role, db_user. We require db_user to have limited privileges for table another_workload.

Undoubtedly, the sensitive_info column should not be accessed by this role. INSERT, UPDATE, and DELETE commands should not be granted at this time either, until db_user meets certain expectations.

However, db_user is required to issue SELECT queries. How can we limit this role's abilities within the another_workload table?

First let's examine the exact syntax found in the PostgreSQL GRANT command docs, at the table level.

GRANT { { SELECT | INSERT | UPDATE | REFERENCES } ( column_name [, ...] )
[, ...] | ALL [ PRIVILEGES ] ( column_name [, ...] ) }
ON [ TABLE ] table_name [, ...]
TO role_specification [, ...] [ WITH GRANT OPTION ]

Next, we implement the requirements set forth for role db_user, applying specific syntax.

trial=> GRANT SELECT (id, first_name, last_name) ON TABLE another_workload TO db_user;
GRANT

Notice just after the SELECT keyword, we listed the columns that db_user can access. Until changed, should db_user attempt SELECT queries on the sensitive_info column, or any other command for that matter, those queries will not be executed.

With db_user logged in, we'll put this into practice, attempting a SELECT query to return all columns and records from the table.

trial=> SELECT * FROM another_workload;
ERROR: permission denied for relation another_workload

Column sensitive_info is included in this query. Therefore, no records are returned to db_user.

But db_user can SELECT the allowable columns:

trial=> SELECT id, first_name, last_name
trial-> FROM another_workload;
id | first_name | last_name
-----+------------+-----------
10 | John | Morris
191 | Jannis | Harper
2 | Remmy | Rosebuilt
(3 rows)

That works just fine.

We will test INSERT, UPDATE, and DELETE commands as well.

trial=> INSERT INTO another_workload(id,first_name,last_name,sensitive_info)
VALUES(17,'Jeremy','Stillman','key code:400Z');
ERROR: permission denied for relation another_workload
trial=> UPDATE another_workload
trial-> SET id = 101
trial-> WHERE id = 10;
ERROR: permission denied for relation another_workload
trial=> DELETE FROM another_workload
trial-> WHERE id = 2;
ERROR: permission denied for relation another_workload

Since the INSERT, UPDATE, and DELETE commands were not granted to db_user, the role is denied access to using them.

With the plethora of available options, configuring your role is virtually limitless. You can make them fully functional, able to execute any command, or as constrained as your requirements dictate.

Actionable takeaways

  • Roles are provided access privileges to database objects via the GRANT command.
Database objects, and the commands against those objects, are highly configurable within the PostgreSQL environment.

Closing

Through this blog post's provided examples, you should have a better understanding of:

  1. Creating a role with specific attributes.
  2. Setting a workable connection between the client and server, allowing roles login access to databases.
  3. Highly customizing your roles to meet individual requirements for database, table, and column level access by implementing necessary attributes.

How to Secure your PostgreSQL Database - 10 Tips


Once you have finished the installation process of your PostgreSQL database server it is necessary to protect it before going into production. In this post, we will show you how to harden the security around your database to keep your data safe and secure.

1. Client Authentication Control

When installing PostgreSQL, a file named pg_hba.conf is created in the database cluster's data directory. This file controls client authentication.

From the official PostgreSQL documentation we can define the pg_hba.conf file as a set of records, one per line, where each record specifies a connection type, a client IP address range (if relevant for the connection type), a database name, a user name, and the authentication method to be used for connections matching these parameters. The first record with a matching connection type, client address, requested database, and user name is used to perform authentication.

So the general format will be something like this:

# TYPE  DATABASE        USER            ADDRESS                 METHOD

An example configuration can be as follows:

# Allow any user from any host with IP address 192.168.93.x to connect
# to database "postgres" as the same user name that ident reports for
# the connection (typically the operating system user name).
#
# TYPE  DATABASE        USER            ADDRESS                 METHOD
 host     postgres              all             192.168.93.0/24         ident
# Reject any user from any host with IP address 192.168.94.x connecting
# to database "postgres".
# TYPE  DATABASE        USER            ADDRESS                 METHOD
 host     postgres              all             192.168.94.0/24         reject

There are a lot of combinations you can make to refine the rules (the official documentation describes each option in detail and has some great examples), but remember to avoid rules that are too permissive, such as allowing access for lines using DATABASE all or ADDRESS 0.0.0.0/0.

To ensure security even if you forget to add a rule, you can add the following row at the bottom:

# TYPE  DATABASE        USER            ADDRESS                 METHOD
 host     all              all             0.0.0.0/0         reject

As the file is read from top to bottom to find matching rules, this ensures that in order to allow a connection, you will need to explicitly add a matching rule above it.

2. Server Configuration

There are some parameters on the postgresql.conf that we can modify to enhance security.

You can use the listen_addresses parameter to control which IP addresses the server will listen on. It is good practice to bind only to known, internal interfaces, and to avoid general values like "*", "0.0.0.0" or "::", which tell PostgreSQL to listen on every available IP address.

Changing the port that PostgreSQL will listen on (5432 by default) is also an option. You can do this by modifying the value of the port parameter.

Parameters such as work_mem, maintenance_work_mem, temp_buffers, max_prepared_transactions and temp_file_limit are important to keep in mind in case of a denial-of-service attack. These are statement/session parameters that can be set at different levels (database, user, session), so managing them wisely can help us minimize the impact of an attack.
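
For example, a sketch of capping these at the user level (the role name and values are illustrative only; statement_timeout is included as another useful guard):

postgres=# ALTER ROLE app_user SET work_mem = '8MB';
postgres=# ALTER ROLE app_user SET statement_timeout = '60s';
postgres=# ALTER ROLE app_user CONNECTION LIMIT 20;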

3. User and Role Management

The golden rule for security regarding user management is to grant users the minimum amount of access they need.

Managing this is not always easy and it can get really messy if not done well from the beginning.

A good way of keeping the privileges under control is to use the role, group, user strategy.

In PostgreSQL everything is considered a role, but we are going to layer a convention on top of this.

In this strategy you will create three different types of roles:

  • role role (identified by prefix r_)
  • group role (identified by prefix g_)
  • user role (generally personal or application names)

The roles (r_ roles) will be the ones having the privileges over the objects. The group roles (g_ roles) will be granted the r_ roles, so they will be a collection of r_ roles. And finally, the user roles will be granted one or more group roles and will be the ones with the login privilege.

Let's show an example of this. We will create a read-only group for the example schema and then grant it to a user:

We create the read only role and grant the object privileges to it

CREATE ROLE r_example_ro NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
GRANT USAGE ON SCHEMA example to r_example_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA example to r_example_ro;
ALTER DEFAULT PRIVILEGES IN SCHEMA example GRANT SELECT ON TABLES TO r_example_ro;

We create the read only group and grant the role to that group

CREATE ROLE g_example_ro NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
GRANT r_example_ro to g_example_ro;

We create the app_user role and make it "join" the read only group

CREATE ROLE app_user WITH LOGIN ;
ALTER ROLE app_user WITH PASSWORD 'somePassword';
ALTER ROLE app_user VALID UNTIL 'infinity';
GRANT g_example_ro TO app_user;

Using this method you can manage the granularity of the privileges and you can easily grant and revoke groups of access to the users. Remember to only grant object privileges to the roles instead of doing it directly for the users and to grant the login privilege only to the users.

It is also good practice to explicitly revoke public privileges on objects, for example revoking public access to a specific database and only granting it back through a role:

REVOKE CONNECT ON DATABASE my_database FROM PUBLIC;
GRANT CONNECT ON DATABASE my_database TO r_example_ro;

Restrict SUPERUSER access: allow superuser connections only from localhost/Unix domain sockets.

Use specific users for different purposes, like dedicated app users or backup users, and limit the connections for each user to only the required IPs.

4. Super User Management

Maintaining a strong password policy is a must for keeping your databases safe and avoiding password hacks. For a strong policy, preferentially use special characters, numbers, and uppercase and lowercase characters, with a length of at least 10 characters.

There are also external authentication tools, like LDAP or PAM, that can help you ensure your password expiration and reuse policy, and also handle account locking on authentication errors.

5. Data Encryption (on connection ssl)

PostgreSQL has native support for using SSL connections to encrypt client/server communications for increased security. SSL (Secure Sockets Layer) is the standard security technology for establishing an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and browsers remain private and integral.

As PostgreSQL clients send queries in plain text and data is also sent unencrypted, communication is vulnerable to network spoofing.

You can enable SSL by setting the ssl parameter to on in postgresql.conf.

The server will listen for both normal and SSL connections on the same TCP port, and will negotiate with any connecting client whether to use SSL. By default, this is at the client's option, but you can set up the server to require the use of SSL for some or all connections using the pg_hba.conf file described above.
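
For example, to require SSL with password authentication for all remote connections, a pg_hba.conf entry such as the following can be used:

# TYPE  DATABASE        USER            ADDRESS                 METHOD
 hostssl  all             all             0.0.0.0/0               md5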

6. Data Encryption at Rest (pg_crypto)

There are two basic kinds of encryption, one way and two way. In one way you don't ever care about decrypting the data into readable form, but you just want to verify the user knows what the underlying secret text is. This is normally used for passwords. In two way encryption, you want the ability to encrypt data as well as allow authorized users to decrypt it into a meaningful form. Data such as credit cards and SSNs would fall in this category.

For one-way encryption, the crypt function packaged in pgcrypto provides an added level of security above the md5 approach. The reason is that with md5, you can tell who has the same password because there is no salt (in cryptography, a salt is random data that is used as an additional input to a one-way function that "hashes" data, a password or passphrase), so all people with the same password will have the same encoded md5 string. With crypt, they will be different.
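
A quick sketch of crypt with a blowfish salt (the pgcrypto extension must be created first; the credentials table and stored_hash column are hypothetical):

postgres=# CREATE EXTENSION pgcrypto;
postgres=# SELECT crypt('mypassword', gen_salt('bf'));
postgres=# -- to verify later, crypt the candidate using the stored hash as the salt and compare:
postgres=# SELECT stored_hash = crypt('mypassword', stored_hash) FROM credentials;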

For data that you care about retrieving, you don't want to merely verify that two pieces of information are the same; you want to store the data in an unreadable form and allow only authorized users to retrieve the meaningful original. pgcrypto provides several ways of accomplishing this, so for further reading on how to use it you can check the official PostgreSQL documentation at https://www.postgresql.org/docs/current/static/pgcrypto.html.

7. Logging

PostgreSQL provides a wide variety of config parameters for controlling what, when, and where to log.

You can enable session connections/disconnections, long-running queries, temp file sizes and so on. This can help you gain better knowledge of your workload in order to identify odd behaviors. You can find all the options for logging at the following link: https://www.postgresql.org/docs/9.6/static/runtime-config-logging.html

For more detailed information on your workload, you can enable the pg_stat_statements module, which provides a means of tracking execution statistics for all SQL statements executed by the server. There are security tools that can ingest the data from this view and generate an SQL whitelist, to help you identify queries that do not follow the expected patterns.

For more information https://www.postgresql.org/docs/9.6/static/pgstatstatements.html.
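
A minimal sketch to enable it (the module must be preloaded, which requires a server restart):

# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'

postgres=# CREATE EXTENSION pg_stat_statements;
postgres=# SELECT query, calls, total_time FROM pg_stat_statements ORDER BY total_time DESC LIMIT 5;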


8. Auditing

The PostgreSQL Audit Extension (pgAudit) provides detailed session and/or object audit logging via the standard PostgreSQL logging facility.

Basic statement logging can be provided by the standard logging facility with log_statement = all. This is acceptable for monitoring and other uses, but does not provide the level of detail generally required for an audit. It is not enough to have a list of all the operations performed against the database; it must also be possible to find particular statements that are of interest to an auditor. The standard logging facility shows what the user requested, while pgAudit focuses on the details of what happened while the database was satisfying the request.
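
A minimal configuration sketch, assuming the pgAudit extension is installed for your PostgreSQL version (preloading requires a server restart, and the log classes shown are illustrative):

# postgresql.conf
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'write, ddl'

postgres=# CREATE EXTENSION pgaudit;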

9. Patching

Check PostgreSQL's security information page regularly for critical security updates and patches.

Keep in mind that OS or library security bugs can also lead to a database leak, so make sure you keep the patching for these up to date as well.

ClusterControl provides an operational report that gives you this information and will execute the patches and upgrades for you.

10. Row-Level Security

In addition to the SQL-standard privilege system available through GRANT, tables can have row security policies that restrict, on a per-user basis, which rows can be returned by normal queries or inserted, updated, or deleted by data modification commands. This feature is also known as Row-Level Security.

When row security is enabled on a table, all normal access to the table for selecting rows or modifying rows must be allowed by a row security policy.

Here is a simple example of how to create a policy on the accounts relation that allows only members of the managers role to access rows, and only the rows of their own accounts:

CREATE TABLE accounts (manager text, company text, contact_email text);
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
CREATE POLICY account_managers ON accounts TO managers USING (manager = current_user);

You can get more information on this feature in the official PostgreSQL documentation: https://www.postgresql.org/docs/9.6/static/ddl-rowsecurity.html

If you would like to learn more, here are some resources that can help you to better strengthen your database security…

Conclusion

If you follow the tips above, your server will be safer, but this does not mean it will be unbreakable.

For your own security we recommend that you use a security test tool like Nessus, to know what your main vulnerabilities are and try to solve them.

You can also monitor your database with ClusterControl. With this you can see in real time what's happening inside your database and analyze it.

PostgreSQL Privileges and Security - Locking Down the Public Schema


Introduction

In a previous article we introduced the basics of understanding PostgreSQL schemas, the mechanics of creation and deletion, and reviewed several use cases. This article will extend upon those basics and explore managing privileges related to schemas.

More Terminology Overloading

But there is one preliminary matter requiring clarification. Recall that in the previous article, we dwelt on a possible point of confusion related to overloading of the term “schema”. The specialized meaning of that term in the context of PostgreSQL databases is distinct from how it is generally used in relational database management systems. We have another similar possible terminology kerfuffle for the present topic related to the word “public”.

Upon initial database creation, the newly created PostgreSQL database includes a pre-defined schema named "public". It is a schema like any other, but the same word is also used as a keyword that denotes "all users" in contexts where otherwise an actual role name might be used, such as ... wait for it ... schema privilege management. The significance and two distinct uses will be clarified in examples below.

Querying Schema Privileges

Before making this concrete with example code to grant and revoke schema privileges, we need to review how to examine schema privileges. Using the psql command line interface, we list the schemas and associated privileges with the \dn+ command. For a newly-created sampledb database we see this entry for the public schema:

sampledb=# \dn+ 
                          List of schemas
  Name  |  Owner   |  Access privileges   |      Description      
--------+----------+----------------------+------------------------
 public | postgres | postgres=UC/postgres+| standard public schema
        |          | =UC/postgres         |
(1 row)

The first two and the fourth columns are pretty straightforward: as mentioned previously, they show the default-created schema named "public", described as "standard public schema", and owned by the role "postgres". (The schema ownership, unless specified otherwise, is set to the role which creates the schema.) The third column, listing the access privileges, is of interest here. The format of the privilege information provides three items in the form "grantee=privileges/grantor": to the left of the equals sign is the role receiving the privilege(s); immediately to the right of the equals sign is a group of letters specifying the particular privilege(s); and following the slash is the role which granted the privilege(s). There may be multiple such privilege specifications, listed separated by a plus sign, since privileges are additive.

For schemas, there are two possible privileges which may be granted separately: U for "USAGE" and C for "CREATE". The former is required for a role to have the ability to look up database objects such as tables and views contained in the schema; the latter privilege allows a role to create database objects in the schema. There are other letters for other privileges relating to different types of database objects, but for schemas, only U and C apply.

Thus to interpret the privilege listing above, the first specification tells us that the postgres user was granted the usage and create privileges by itself on the public schema.

Notice that for the second specification above, an empty string appears to the left of the equals sign. This is how privileges granted to all users, by means of the PUBLIC keyword mentioned earlier, are denoted.

This latter specification, granting usage and create privileges on the public schema to all users, is viewed by some as contrary to general security best practices, where one might prefer to start with access restricted by default, requiring the database administrator to explicitly grant appropriate and minimally necessary access privileges. These liberal privileges on the public schema are purposely configured in the system as a convenience and for legacy compatibility.

Note also that except for the permissive privilege settings, the only other thing special about the public schema is that it is also listed in the search_path, as we discussed in the previous article. This is similarly for convenience: the search_path configuration and liberal privileges together result in a new database being usable as if there were no such concept as schemas.

Historical Background on the Public Schema

This compatibility concern originates from about fifteen years ago (prior to PostgreSQL version 7.3, cf. the version 7.3 release notes) when the schema feature was not part of PostgreSQL. Configuring the public schema with liberal privileges and including it in the search_path when schemas were introduced in version 7.3 allowed older applications, which are not schema-aware, to function unmodified with the upgraded database feature.

Otherwise there is nothing else particularly special about the public schema: some DBAs delete it if their use case presents no requirement for it; others lock it down by revoking the default privileges.

Show Me the Code - Revoking Privileges

Let’s write some code to illustrate and expand on what we have discussed so far.

Schema privileges are managed with the GRANT and REVOKE commands to respectively add and withdraw privileges. We’ll try some specific examples for locking down the public schema, but the general syntax is:

REVOKE [ GRANT OPTION FOR ]
    { { CREATE | USAGE } [, ...] | ALL [ PRIVILEGES ] }
    ON SCHEMA schema_name [, ...]
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT ]

So, as an initial lock down example, let’s remove the create privilege from the public schema. Note that in these examples the lowercase word “public” refers to the schema and could be replaced by any other valid schema name that might exist in the database. The uppercase “PUBLIC” is the special keyword that implies “all users” and could instead be replaced with a specific role name or comma-separated list of role names for more fine-grained access control.

sampledb=# REVOKE CREATE ON SCHEMA public FROM PUBLIC;
REVOKE
sampledb=# \dn+
                          List of schemas
  Name  |  Owner   |  Access privileges   |      Description       
--------+----------+----------------------+------------------------
 public | postgres | postgres=UC/postgres+| standard public schema
        |          | =U/postgres          | 
(1 row)

The only difference in this listing of schema privileges from the first is the absence of the “C” in the second privilege specification, verifying our command was effective: users other than the postgres user may no longer create tables, views, or other objects in the public schema.

Note that the above command revoking create privileges from the public schema is the recommended mitigation for a recently published vulnerability, CVE-2018-1058, which arises from the default privilege setting on the public schema.

A further level of lock down could entail denying lookup access to the schema entirely by removing the usage privilege:

sampledb=# REVOKE USAGE ON SCHEMA public FROM PUBLIC;
REVOKE
sampledb=# \dn+
                          List of schemas
  Name  |  Owner   |  Access privileges   |      Description       
--------+----------+----------------------+------------------------
 public | postgres | postgres=UC/postgres | standard public schema
(1 row)

Since all available schema privileges for non-owner users have been revoked, the entire second privilege specification disappears in the listing above.

What we did with two separate commands could have been succinctly accomplished with a single command specifying all privileges as:

sampledb=# REVOKE ALL PRIVILEGES ON SCHEMA public FROM PUBLIC;
REVOKE

It is also possible to revoke privileges from the schema owner:

sampledb=# REVOKE ALL PRIVILEGES ON SCHEMA public FROM postgres;
REVOKE
sampledb=# \dn+
                        List of schemas
  Name  |  Owner   | Access privileges |      Description       
--------+----------+-------------------+------------------------
 public | postgres |                   | standard public schema
(1 row)

but that does not really accomplish anything practical, as the schema owner retains full privileges to owned schemas regardless of explicit assignment simply by virtue of ownership.

The liberal privilege assignment for the public schema is a special artifact associated with initial database creation. Subsequently-created schemas in an existing database do conform with the best practice of starting without assigned privileges. For example, examining schema privileges after creating a new schema named “private” shows the new schema has no privileges:

sampledb=# create schema private;
CREATE SCHEMA
sampledb=# \dn+
                          List of schemas
  Name   |  Owner   |  Access privileges   |      Description       
---------+----------+----------------------+------------------------
 private | postgres |                      | 
 public  | postgres |                      | standard public schema
(2 rows)

Show Me the Code - Granting Privileges

The general form of the command to add privileges is:

GRANT { { CREATE | USAGE } [, ...] | ALL [ PRIVILEGES ] }
    ON SCHEMA schema_name [, ...]
    TO role_specification [, ...] [ WITH GRANT OPTION ]
where role_specification can be:
  [ GROUP ] role_name
  | PUBLIC
  | CURRENT_USER
  | SESSION_USER

Using this command we can, for example, allow all roles to lookup database objects in the private schema by adding the usage privilege with

sampledb=# GRANT USAGE ON SCHEMA private TO PUBLIC;
GRANT
sampledb=# \dn+
                          List of schemas
  Name   |  Owner   |  Access privileges   |      Description       
---------+----------+----------------------+------------------------
 private | postgres | postgres=UC/postgres+| 
         |          | =U/postgres          | 
 public  | postgres |                      | standard public schema
(2 rows)

Note how the UC privileges appear for the postgres owner as the first specification, now that we have assigned other-than-default privileges to the schema. The second specification, =U/postgres, corresponds to the GRANT command we just invoked as user postgres granting usage privilege to all users (where, recall, the empty string left of the equal sign implies “all users”).

A specific role, named “user1” for example, can be granted both create and usage privileges to the private schema with:

sampledb=# GRANT ALL PRIVILEGES ON SCHEMA private TO user1;
GRANT
sampledb=# \dn+
                          List of schemas
  Name   |  Owner   |  Access privileges   |      Description       
---------+----------+----------------------+------------------------
 private | postgres | postgres=UC/postgres+| 
         |          | =U/postgres         +| 
         |          | user1=UC/postgres    | 
 public  | postgres |                      | standard public schema
(2 rows)

We have not yet mentioned the “WITH GRANT OPTION” clause of the general command form. Just as it sounds, this clause permits a granted role the power to itself grant the specified privilege to other users, and it is denoted in the privilege listing by asterisks appended to the specific privilege:

sampledb=# GRANT ALL PRIVILEGES ON SCHEMA private TO user1 WITH GRANT OPTION;
GRANT
sampledb=# \dn+
                          List of schemas
  Name   |  Owner   |  Access privileges   |      Description       
---------+----------+----------------------+------------------------
 private | postgres | postgres=UC/postgres+| 
         |          | =U/postgres         +| 
         |          | user1=U*C*/postgres  | 
 public  | postgres |                      | standard public schema
(2 rows)

Conclusion

This wraps up the topic for today. As a final note, though, remember that we have discussed only schema access privileges. While the USAGE privilege allows lookup of database objects in a schema, to actually access the objects for specific operations, such as reading, writing, or execution, the role must also have appropriate privileges for those operations on those specific database objects.
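
For example, granting a role lookup access to a schema plus read access to one of its tables might look like the following sketch (the table name some_table is hypothetical):

GRANT USAGE ON SCHEMA private TO user1;        -- allows resolving object names in the schema
GRANT SELECT ON private.some_table TO user1;   -- allows actually reading that table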

Setting up HTTPS on the ClusterControl Server


As a platform that manages all of your databases, ClusterControl maintains the communication with the backend servers; it sends commands and collects metrics. To avoid unauthorized access, it is critical that the communication between your browser and the ClusterControl UI is encrypted. In this blog post we will take a look at how ClusterControl uses HTTPS to improve security.

By default, ClusterControl is configured with HTTPS enabled when you deploy it using the deployment script. All you need to do is point your browser to https://cc.node.hostname/clustercontrol and you can enjoy a secure connection.

We will go through this configuration in detail. If you do not have HTTPS configured for ClusterControl, this blog will show you how to change your Apache config to enable secure connections.

Apache configuration - Debian/Ubuntu

When Apache is deployed by ClusterControl, the file /etc/apache2/sites-enabled/001-s9s-ssl.conf is created. Below is the content of that file, stripped of comments:

root@vagrant:~# cat /etc/apache2/sites-enabled/001-s9s-ssl.conf | perl -pe 's/\s*\#.*//' | sed '/^$/d'
<IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName cc.severalnines.local
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/html
        RewriteEngine On
        RewriteRule ^/clustercontrol/ssh/term$ /clustercontrol/ssh/term/ [R=301]
        RewriteRule ^/clustercontrol/ssh/term/ws/(.*)$ ws://127.0.0.1:9511/ws/$1 [P,L]
        RewriteRule ^/clustercontrol/ssh/term/(.*)$ http://127.0.0.1:9511/$1 [P]
        <Directory />
            Options +FollowSymLinks
            AllowOverride All
        </Directory>
        <Directory /var/www/html>
            Options +Indexes +FollowSymLinks +MultiViews
            AllowOverride All
            Require all granted
        </Directory>
        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/s9server.crt
        SSLCertificateKeyFile /etc/ssl/private/s9server.key
        <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
        </FilesMatch>
        <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
        </Directory>
        BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
        BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
    </VirtualHost>
</IfModule>

The important and non-standard bits are the RewriteRule directives, which are used for the web SSH feature in the UI. Otherwise, it’s a pretty standard VirtualHost definition. Please note that you will have to create SSL keys if you attempt to recreate this configuration by hand; ClusterControl creates them for you at installation time.
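
Should you need to generate the key pair yourself, a self-signed certificate can be created with openssl; a sketch (the subject and validity period are arbitrary, and the paths match the Debian configuration above):

root@vagrant:~# openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/private/s9server.key \
    -out /etc/ssl/certs/s9server.crt \
    -subj "/CN=cc.severalnines.local"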

Also in /etc/apache2/ports.conf a directive for Apache to listen on port 443 has been added:

<IfModule ssl_module>
        Listen 443
</IfModule>

<IfModule mod_gnutls.c>
        Listen 443
</IfModule>

Again, a pretty typical setup.

Apache configuration - Red Hat/CentOS

The configuration looks almost the same; it’s just located in a different place:

[root@localhost ~]# cat /etc/httpd/conf.d/ssl.conf | perl -pe 's/\s*\#.*//' | sed '/^$/d'
<IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName cc.severalnines.local
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/html
        RewriteEngine On
        RewriteRule ^/clustercontrol/ssh/term$ /clustercontrol/ssh/term/ [R=301]
        RewriteRule ^/clustercontrol/ssh/term/ws/(.*)$ ws://127.0.0.1:9511/ws/$1 [P,L]
        RewriteRule ^/clustercontrol/ssh/term/(.*)$ http://127.0.0.1:9511/$1 [P]
        <Directory />
            Options +FollowSymLinks
            AllowOverride All
        </Directory>
        <Directory /var/www/html>
            Options +Indexes +FollowSymLinks +MultiViews
            AllowOverride All
            Require all granted
        </Directory>
        SSLEngine on
        SSLCertificateFile /etc/pki/tls/certs/s9server.crt
        SSLCertificateKeyFile /etc/pki/tls/private/s9server.key
        <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
        </FilesMatch>
        <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
        </Directory>
        BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
        BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
    </VirtualHost>
</IfModule>

Again, the RewriteRule directives are used to enable the web SSH console. In addition, ClusterControl adds the following lines at the top of the /etc/httpd/conf/httpd.conf file:

ServerName 127.0.0.1
Listen 443

This is all that’s needed to have ClusterControl running using HTTPS.


Troubleshooting

In case of issues, here are the steps you can use to identify some of the problems. First of all, if you cannot access ClusterControl over HTTPS, please make sure that Apache listens on port 443. You can check this using netstat. Below are the results for CentOS 7 and Ubuntu 16.04:

[root@localhost ~]# netstat -lnp | grep 443
tcp6       0      0 :::443                  :::*                    LISTEN      977/httpd

root@vagrant:~# netstat -lnp | grep 443
tcp6       0      0 :::443                  :::*                    LISTEN      1389/apache2

If Apache does not listen on that port, please review the configuration and check if there’s a “Listen 443” directive in Apache’s configuration. Please also check that the ssl module is enabled. You can verify this by running:

root@vagrant:~# apachectl -M | grep ssl
 ssl_module (shared)
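
If the module is not listed, on Debian/Ubuntu it can typically be enabled with a2enmod followed by an Apache restart (assuming a stock Apache installation):

root@vagrant:~# a2enmod ssl
root@vagrant:~# systemctl restart apache2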

If you have the “Listen” directive inside an “IfModule” section, like below:

<IfModule ssl_module>
        Listen 443
</IfModule>

you have to make sure that it comes in the configuration after the modules have been loaded. For example, in Ubuntu 16.04 these are the lines in /etc/apache2/apache2.conf:

# Include module configuration:
IncludeOptional mods-enabled/*.load
IncludeOptional mods-enabled/*.conf

On CentOS 7 it’ll be the /etc/httpd/conf/httpd.conf file and this line:

# Example:
# LoadModule foo_module modules/mod_foo.so
#
Include conf.modules.d/*.conf

Normally, ClusterControl handles this correctly, but if you are adding HTTPS support manually, you need to keep it in mind.

As always, please refer to the Apache logs for further investigation - if HTTPS is up but for some reason you cannot reach the UI, more clues may be found in the logs.

Integrating PostgreSQL with Authentication Systems


PostgreSQL is one of the most secure databases in the world. Database security plays an imperative role in real-world, mission-critical environments. It is important to ensure databases and data are always secured and not subjected to unauthorized access that would compromise data security. Whilst PostgreSQL provides various mechanisms and methods for users to access the database in a secure manner, it can also be integrated with various external authentication systems to ensure enterprise-standard database security requirements are met.

Apart from providing secure authentication mechanisms via SSL, MD5, pgpass, pg_ident and so on, PostgreSQL can be integrated with various other popular enterprise-grade external authentication systems. My focus in this blog will be on LDAP, Kerberos and RADIUS, with SSL and pg_ident.

LDAP

LDAP refers to the Lightweight Directory Access Protocol, a popular centralized authentication system. It is a datastore which stores user credentials and various other user-related details like names, domains and business units in the form of a hierarchy. End users connecting to target systems (e.g., a database) must first authenticate against the LDAP server. LDAP is one of the popular authentication systems currently used in organizations demanding high security standards.

LDAP + PostgreSQL

PostgreSQL can be integrated with LDAP. In my customer consulting experience, this is considered one of the key capabilities of PostgreSQL. As the authentication of the username and password takes place at the LDAP server, the user account must also exist in the database. In other words, users attempting to connect to PostgreSQL are routed to the LDAP server first, and then to the Postgres database upon successful authentication. Configuration is made in the pg_hba.conf file to ensure connections are routed to the LDAP server. Below is a sample pg_hba.conf entry:

host    all    pguser   0.0.0.0/0    ldap ldapserver=ldapserver.example.com ldapprefix="cn=" ldapsuffix=", dc=example, dc=com"

Below is another example of an LDAP entry in pg_hba.conf, this time including an organizational unit:

host    all    pguser   0.0.0.0/0    ldap ldapserver=ldapserver.example.com ldapprefix="cn=" ldapsuffix=", ou=finance, dc=example, dc=com"

And when using a non-default LDAP port and TLS:

ldap ldapserver=ldapserver.example.com ldaptls=1 ldapport=5128 ldapprefix="uid=" ldapsuffix=",ou=finance,dc=apix,dc=com"

Understanding the above LDAP entry

  • LDAP uses various attributes and terminologies to store and search for a user entry in its datastore. Also, as mentioned above, user entries are stored in a hierarchy.
  • The above pg_hba.conf LDAP entries consist of attributes called CN (Common Name), OU (Organization Unit) and DC (Domain Component), which are termed Relative Distinguished Names (RDNs); a sequence of RDNs together forms the DN (Distinguished Name). The DN is the LDAP object on which the search is performed in the LDAP datastore.
  • LDAP attribute values like CN, DC, OU etc. are defined in LDAP’s Object Classes, which can be provided by the systems experts who built the LDAP environment.
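
Before wiring the entry into pg_hba.conf, it can be worth verifying that the DN built from ldapprefix/ldapsuffix actually resolves; a sketch using the standard ldapsearch client, reusing the example server and domain components from above:

$ ldapsearch -x -H ldap://ldapserver.example.com -b "dc=example,dc=com" "(cn=pguser)"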

Will that make LDAP secured enough?

Maybe not. Passwords communicated over the network in a plain LDAP setup are not encrypted, which is a security risk, as the passwords can be intercepted. There are options to make the credentials communication more secure:

  1. Consider configuring LDAP on TLS (Transport Layer Security)
  2. LDAP can be configured with SSL which is another option

Tips to achieve LDAP integration with PostgreSQL

(for Linux based systems)

  • Install appropriate openLDAP modules based on operating system version
  • Ensure PostgreSQL software is installed with LDAP libraries
  • Ensure LDAP is integrated well with Active Directory
  • Familiarize yourself with any existing bugs in the openLDAP modules being used. These can be catastrophic and can compromise security standards.
  • Windows Active Directory can also be integrated with LDAP
  • Consider configuring LDAP with SSL, which is more secure. Install appropriate openSSL modules and be aware of bugs like Heartbleed, which can expose credentials transmitted over the network.

Kerberos

Kerberos is an industry-standard centralized authentication system popularly used in organizations, providing an encryption-based authentication mechanism. Passwords are authenticated by a third-party authentication server termed the KDC (Key Distribution Center). Passwords can be encrypted using various algorithms and can only be decrypted with the help of shared private keys. This also means that passwords communicated over the network are encrypted.

PostgreSQL + Kerberos

PostgreSQL supports GSSAPI-based authentication with Kerberos. Users attempting to connect to the Postgres database are routed to the KDC server for authentication. This authentication between clients and the KDC database is performed based on shared private keys, and upon successful authentication the clients hold Kerberos-based credentials. The same credentials are then validated between the Postgres server and the KDC based on the keytab file generated by Kerberos. This keytab file must exist on the database server, with appropriate permissions for the user owning the Postgres process.

The Kerberos configuration and connection process -

  • Kerberos-based user accounts must generate a ticket (a connection request) using the “kinit” command.

  • A keytab file must be generated using the “kadmin” command for a fully qualified Kerberos-based user account (principal), and Postgres then uses that keytab file to validate the credentials. Principals can be encrypted and added to an existing keytab file using the “ktadd” command (a sketch follows this list). Kerberos encryption supports various industry-standard encryption algorithms.

    The generated keytab file must be copied to the Postgres server and must be readable by the Postgres process. The following postgresql.conf parameter must be configured:

    krb_server_keyfile = '/database/postgres/keytab.example.com'

    If you are particular about case sensitivity, then use the krb_caseins_users parameter, which is “off” (case sensitive) by default.
  • An entry must be made in the pg_hba.conf to ensure connections are routed to KDC server

    Example pg_hba.conf entry

    # TYPE  DATABASE  USER  CIDR-ADDRESS     METHOD
    host    all       all   192.168.1.6/32   gss include_realm=1 krb_realm=EXAMPLE.COM

    Example pg_hba.conf entry with map entry

    # TYPE  DATABASE  USER  CIDR-ADDRESS     METHOD
    host    all       all   192.168.1.6/32   gss include_realm=1 krb_realm=EXAMPLE.COM map=krb
  • A user account attempting to connect must be added to the KDC database (such an account is termed a principal), and the same user account or a mapped user account must exist in the database as well

    Below is an example of a Kerberos principal

    pguser@example.com

    pguser is the username and “example.com” is the realm name configured in the Kerberos config (/etc/krb5.conf) on the KDC server.

    In the Kerberos world, principals are in an email-like format (username@realmname), and database users cannot be created in that format. This leads DBAs to create a mapping of database user names instead and to ensure principals connect with mapped names using pg_ident.conf.

    Below is an example of a map name entry in pg_ident.conf

    # MAPNAME   SYSTEM-USERNAME          PG-USERNAME
    mapuser     /^(.*)EXAMPLE\.DOMAIN$   admin
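
For reference, here is a hedged sketch of the keytab steps mentioned in the list above, using standard kadmin commands (the principal name and keytab path are example values):

$ kadmin -q "addprinc -randkey postgres/db.example.com@EXAMPLE.COM"
$ kadmin -q "ktadd -k /database/postgres/keytab.example.com postgres/db.example.com@EXAMPLE.COM"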

Will that make Kerberos Secured enough ?

Maybe not. User credentials communicated over the network can still be exposed or stolen, and though Kerberos encrypts the principals, they too can be compromised. This brings in the need for network-layer security, and SSL or TLS is the way to go (TLS is the successor of SSL). It is recommended to have Kerberos configured with SSL or TLS so that the communication over the network is secured.

TIPS

  • Ensure krb* libraries are installed
  • OpenSSL libraries must be installed to configure SSL
  • Ensure Postgres is installed with the following options
    ./configure --with-gssapi --with-krb-srvnam --with-openssl

RADIUS

RADIUS is a remote authentication network protocol which provides centralized Authentication, Authorization and Accounting (AAA). Username/password pairs are authenticated at the RADIUS server. This way of centralized authentication is much more straightforward and simpler compared to authentication systems like LDAP and Kerberos, which involve a bit more complexity.

RADIUS + PostgreSQL

PostgreSQL can be integrated with the RADIUS authentication mechanism (accounting is not supported in Postgres yet). This requires database user accounts to exist in the database. Connections to the database are authorized based on a shared secret termed the “radiussecret”.

An entry in the pg_hba.conf config is essential to route connections to the RADIUS server for authentication.

Example pg_hba.conf entry

hostssl             all        all        0.0.0.0/0         radius  radiusserver=127.0.0.1 radiussecret=secretr radiusport=3128

To understand the above entry:

“radiusserver” is the host IP address of the RADIUS server to which users are routed for authentication. This parameter is configured in /etc/radiusd.conf on the RADIUS server.

“radiussecret” is taken from clients.conf. This is the secret code which uniquely identifies the RADIUS client connection.

“radiusport” can be found in the /etc/radiusd.conf file. This is the port on which RADIUS connections listen.
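
Before pointing PostgreSQL at the RADIUS server, the shared secret can be sanity-checked with the radtest utility that ships with FreeRADIUS; a sketch reusing the example values above (the username/password pair is hypothetical, and radtest syntax may vary between versions):

$ radtest pguser pgpass 127.0.0.1:3128 0 secretr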

Importance of SSL

SSL (Secure Sockets Layer) plays an imperative role with external authentication systems in place. It is highly recommended to configure SSL with an external authentication system, as sensitive information is communicated between clients and servers over the network, and SSL can further tighten security.
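
On the PostgreSQL side, enabling TLS is a matter of a few postgresql.conf settings; a minimal sketch (the certificate and key file names are the server defaults, your paths may differ):

ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'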

Performance Impact of using external authentication systems

An effective and efficient security system comes at the expense of performance. As clients/users attempting to connect to the database are routed to authentication systems to establish the connection, there can be performance degradation. There are ways to overcome these performance hurdles:

  • With an external authentication mechanism in place, there can be a delay when establishing a connection to the database. This can be a real concern when a huge number of connections are being established.
  • Developers need to ensure that an unnecessarily high number of connections are not made to the database. Serving multiple application requests over one connection is advantageous.
  • Also, how long each request takes at the database end plays an important role. If a request takes long to complete, subsequent requests queue up. Performance tuning of the processes and meticulously architecting the infrastructure will be key.
  • The database and infrastructure must be efficiently architected and adequately sized to ensure good performance.
  • When doing performance benchmarking, ensure SSL is enabled and evaluate the average connection establishment time.

Integrating external authentication systems with ClusterControl - PostgreSQL

PostgreSQL instances can be built and configured automatically via the ClusterControl GUI. Integrating external authentication systems with PostgreSQL instances deployed via ClusterControl is much the same as integration with traditional PostgreSQL instances, and in fact is a bit simpler. Below is an overview:

  • ClusterControl installs PostgreSQL libraries with LDAP, KRB, GSSAPI and OpenSSL capabilities enabled
  • Integration with external authentication systems requires various parameter configuration changes on the PostgreSQL database server, which can be done using the ClusterControl GUI

PostgreSQL Audit Logging Best Practices


In every IT system where important business tasks take place, it is important to have an explicit set of policies and practices, and to make sure those are respected and followed.

Introduction to Auditing

An Information Technology system audit is the examination of the policies, processes, procedures, and practices of an organization regarding IT infrastructure against a certain set of objectives. An IT audit may be of two generic types:

  • Checking against a set of standards on a limited subset of data
  • Checking the whole system

An IT audit may cover certain critical system parts, such as the ones related to financial data in order to support a specific set of regulations (e.g. SOX), or the entire security infrastructure against regulations such as the new EU GDPR regulation which addresses the need for protecting privacy and sets the guidelines for personal data management. The SOX example is of the former type described above whereas GDPR is of the latter.

The Audit Lifecycle

Planning

The scope of an audit is dependent on the audit objective. The scope may cover a special application identified by a specific business activity, such as a financial activity, or the whole IT infrastructure covering system security, data security and so forth. The scope must be correctly identified beforehand as an early step in the initial planning phase. The organization is supposed to provide to the auditor all the necessary background information to help with planning the audit. This may be the functional/technical specifications, system architecture diagrams or any other information requested.

Control Objectives

Based on the scope, the auditor forms a set of control objectives to be tested by the audit. Those control objectives are implemented via management practices that are supposed to be in place in order to achieve control to the extent described by the scope. The control objectives are associated with test plans and those together constitute the audit program. Based on the audit program the organization under audit allocates resources to facilitate the auditor.

Findings

The auditor tries to get evidence that all control objectives are met. If for some control objective there is no such evidence, the auditor first tries to see if there is some alternative way that the company handles it; if such a way exists, the control objective is marked as compensating and the auditor considers it met. If however there is no evidence at all that an objective is met, this is marked as a finding. Each finding consists of the condition, criteria, cause, effect and recommendation. The IT manager must be in close contact with the auditor in order to be informed of all potential findings, and make sure that all requested information is shared between management and the auditor, in order to assure that the control objective is met (and thus avoid the finding).

The Assessment Report

At the end of the audit process the auditor will write an assessment report as a summary covering all important parts of the audit, including any potential findings followed by a statement on whether the objective is adequately addressed and recommendations for eliminating the impact of the findings.

What is Audit Logging and Why Should You Do It?

The auditor wants to have full access to the changes on software, data and the security system. He/she not only wants to be able to track down any change to the business data, but also track changes to the organizational chart, the security policy, the definition of roles/groups and changes to role/group membership. The most common way to perform an audit is via logging. Although it was possible in the past to pass an IT audit without log files, today it is the preferred (if not the only) way.

Typically the average IT system comprises at least two layers:

  • Database
  • Application (possibly on top of an application server)

The application maintains its own logs covering user access and actions, and the database and possibly the application server systems maintain their own logs. Clean, readily usable information in log files which has real business value from the auditor perspective is called an audit trail. Audit trails differ from ordinary log files (sometimes called native logs) in that:

  • Log files are dispensable
  • Audit trails should be kept for longer periods
  • Log files add overhead to the system’s resources
  • Log files’ purpose is to help the system admin
  • Audit trails’ purpose is to help the auditor

We summarise the above in the following table:

Log type          App/System  Audit Trail friendly
App logs          App         Yes
App server logs   System      No
Database logs     System      No

App logs may be easily tailored to be used as audit trails; system logs not so easily, because:

  • They are limited in their format by the system software
  • They act globally on the whole system
  • They don’t have direct knowledge about specific business context
  • They usually require additional software for later offline parsing/processing in order to produce usable audit-friendly audit trails.

On the other hand, app logs place an additional software layer on top of the actual data, thus:

  • Making the audit system more vulnerable to application bugs/misconfiguration
  • Creating a potential hole in the logging process if someone tries to access data directly on the database bypassing the app logging system, such as a privileged user or a DBA
  • Making the audit system more complex and harder to manage and maintain in case we have many applications or many software teams.

So, ideally we would be looking for the best of the two: Having usable audit trails with the greatest coverage on the whole system including database layer, and configurable in one place, so that the logging itself can be easily audited by means of other (system) logs.

Audit Logging with PostgreSQL

The options we have in PostgreSQL regarding audit logging are the following:

  • Exhaustive statement logging via log_statement = all
  • A trigger-based solution such as the community audit-trigger
  • The pgaudit extension

Exhaustive logging, at least for standard usage in OLTP or OLAP workloads, should be avoided because it:

  • Produces huge files, increases load
  • Does not have inner knowledge of tables being accessed or modified, just prints the statement which might be a DO block with a cryptic concatenated statement
  • Needs additional software/resources for offline parsing and processing (in order to produce the audit trails) which in turn must be included in the scope of the audit, to be considered trustworthy

In the rest of this article we will try the tools provided by the community. Let’s suppose that we have this simple table that we want to audit:

myshop=# \d orders
                                       Table "public.orders"
   Column   |           Type           | Collation | Nullable |              Default               
------------+--------------------------+-----------+----------+------------------------------------
 id         | integer                  |           | not null | nextval('orders_id_seq'::regclass)
 customerid | integer                  |           | not null |
 customer   | text                     |           | not null |
 xtime      | timestamp with time zone |           | not null | now()
 productid  | integer                  |           | not null |
 product    | text                     |           | not null |
 quantity   | integer                  |           | not null |
 unit_price | double precision         |           | not null |
 cur        | character varying(20)    |           | not null | 'EUR'::character varying
Indexes:
    "orders_pkey" PRIMARY KEY, btree (id)

audit-trigger 91plus

The docs about using the trigger can be found here: https://wiki.postgresql.org/wiki/Audit_trigger_91plus. First we download and install the provided DDL (functions, schema):

$ wget https://raw.githubusercontent.com/2ndQuadrant/audit-trigger/master/audit.sql
$ psql myshop
psql (10.3 (Debian 10.3-1.pgdg80+1))
Type "help" for help.
myshop=# \i audit.sql

Then we define the triggers for our table orders using the basic usage:

myshop=# SELECT audit.audit_table('orders');

This will create two triggers on table orders: an insert/update/delete row trigger and a truncate statement trigger. Now let’s see what the trigger does:

myshop=# insert into orders (customer,customerid,product,productid,unit_price,quantity) VALUES('magicbattler',1,'some fn skin 2',2,5,2);      
INSERT 0 1
myshop=# update orders set quantity=3 where id=2;
UPDATE 1
myshop=# delete from orders  where id=2;
DELETE 1
myshop=# select table_name, action, session_user_name, action_tstamp_clk, row_data, changed_fields from audit.logged_actions;
-[ RECORD 1 ]-----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
table_name        | orders
action            | I
session_user_name | postgres
action_tstamp_clk | 2018-05-20 00:15:10.887268+03
row_data          | "id"=>"2", "cur"=>"EUR", "xtime"=>"2018-05-20 00:15:10.883801+03", "product"=>"some fn skin 2", "customer"=>"magicbattler", "quantity"=>"2", "productid"=>"2", "customerid"=>"1", "unit_price"=>"5"
changed_fields    |
-[ RECORD 2 ]-----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
table_name        | orders
action            | U
session_user_name | postgres
action_tstamp_clk | 2018-05-20 00:16:12.829065+03
row_data          | "id"=>"2", "cur"=>"EUR", "xtime"=>"2018-05-20 00:15:10.883801+03", "product"=>"some fn skin 2", "customer"=>"magicbattler", "quantity"=>"2", "productid"=>"2", "customerid"=>"1", "unit_price"=>"5"
changed_fields    | "quantity"=>"3"
-[ RECORD 3 ]-----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
table_name        | orders
action            | D
session_user_name | postgres
action_tstamp_clk | 2018-05-20 00:16:24.944117+03
row_data          | "id"=>"2", "cur"=>"EUR", "xtime"=>"2018-05-20 00:15:10.883801+03", "product"=>"some fn skin 2", "customer"=>"magicbattler", "quantity"=>"3", "productid"=>"2", "customerid"=>"1", "unit_price"=>"5"
changed_fields    |

Note the changed_fields value on the UPDATE (RECORD 2). There are more advanced uses of the audit trigger, like excluding columns (a sketch follows the caveats below) or using the WHEN clause, as shown in the docs. The audit trigger certainly seems to do the job of creating useful audit trails inside the audit.logged_actions table. However there are some caveats:

  • No SELECTs (triggers do not fire on SELECTs) or DDL are tracked
  • Changes made by table owners and superusers can be easily tampered with
  • Best practices must be followed regarding the app user(s) and app schema and tables owners
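
As for the advanced usage mentioned above, excluding columns is done by passing an array of ignored column names; a hedged sketch based on the signature documented in the wiki (here ignoring the xtime column of our orders table):

myshop=# SELECT audit.audit_table('orders', true, true, '{xtime}'::text[]);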

Pgaudit

Pgaudit is the newest addition to PostgreSQL as far as auditing is concerned. Pgaudit must be installed as an extension, as shown on the project’s GitHub page: https://github.com/pgaudit/pgaudit. Pgaudit logs to the standard PostgreSQL log. Pgaudit works by registering itself upon module load and providing hooks for executorStart, executorCheckPerms, processUtility and object_access. Therefore pgaudit (in contrast to trigger-based solutions such as audit-trigger discussed in the previous paragraphs) supports READs (SELECT, COPY). Generally with pgaudit we can have two modes of operation, or use them combined:

  • SESSION audit logging
  • OBJECT audit logging

Session audit logging supports most DML, DDL, privilege and misc commands via classes:

  • READ (select, copy from)
  • WRITE (insert, update, delete, truncate, copy to)
  • FUNCTION (function calls and DO blocks)
  • ROLE (grant, revoke, create/alter/drop role)
  • DDL (all DDL except those in ROLE)
  • MISC (discard, fetch, checkpoint, vacuum)

The metaclass “all” includes all classes, while a minus sign (-) excludes a class. For instance, let us configure session audit logging for everything except MISC, with the following GUC parameters in postgresql.conf:

pgaudit.log_catalog = off
pgaudit.log = 'all, -misc'
pgaudit.log_relation = 'on'
pgaudit.log_parameter = 'on'
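
For these settings to have any effect, pgaudit must of course be loaded and installed first (standard steps from the project page; changing shared_preload_libraries requires a server restart):

shared_preload_libraries = 'pgaudit'   # in postgresql.conf, then restart

myshop=# CREATE EXTENSION pgaudit;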

By issuing the following commands (the same as in the trigger example):

myshop=# insert into orders (customer,customerid,product,productid,unit_price,quantity) VALUES('magicbattler',1,'some fn skin 2',2,5,2);
INSERT 0 1
myshop=# update orders set quantity=3 where id=2;
UPDATE 1
myshop=# delete from orders  where id=2;
DELETE 1
myshop=#

We get the following entries in PostgreSQL log:

% tail -f data/log/postgresql-22.log | grep AUDIT:
[local] [55035] 5b03e693.d6fb 2018-05-22 12:46:37.352 EEST psql postgres@testdb line:7 LOG:  AUDIT: SESSION,5,1,WRITE,INSERT,TABLE,public.orders,"insert into orders (customer,customerid,product,productid,unit_price,quantity) VALUES('magicbattler',1,'some fn skin 2',2,5,2);",<none>
[local] [55035] 5b03e693.d6fb 2018-05-22 12:46:50.120 EEST psql postgres@testdb line:8 LOG:  AUDIT: SESSION,6,1,WRITE,UPDATE,TABLE,public.orders,update orders set quantity=3 where id=2;,<none>
[local] [55035] 5b03e693.d6fb 2018-05-22 12:46:59.888 EEST psql postgres@testdb line:9 LOG:  AUDIT: SESSION,7,1,WRITE,DELETE,TABLE,public.orders,delete from orders  where id=2;,<none>

Note that the text after AUDIT: makes up a perfect audit trail, almost ready to ship to the auditor in spreadsheet-ready CSV format. Using session audit logging will give us audit log entries for all operations belonging to the classes defined by the pgaudit.log parameter, on all tables. However, there are cases where we wish only a small subset of the data, i.e. only a few tables, to be audited. In such cases we may prefer object audit logging, which gives us fine-grained criteria for selecting tables/columns via PostgreSQL’s privilege system. In order to start using object audit logging we must first configure the pgaudit.role parameter, which defines the master role that pgaudit will use. It makes sense not to give this user any login rights.

CREATE ROLE auditor;
ALTER ROLE auditor WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN NOREPLICATION NOBYPASSRLS CONNECTION LIMIT 0;

Then we specify this value for pgaudit.role in postgresql.conf:

pgaudit.log = none # no need for extensive SESSION logging
pgaudit.role = auditor

Pgaudit OBJECT logging works by checking whether the user auditor has been granted (directly or by inheritance) the right to execute the specified action on the relations/columns used in a statement. So if we need to ignore all tables but have detailed logging for table orders, this is the way to do it:

grant ALL on orders to auditor ;

With the above grant we enable full SELECT, INSERT, UPDATE and DELETE logging on table orders. Let’s issue once again the INSERT, UPDATE and DELETE of the previous examples and watch the PostgreSQL log:

% tail -f data/log/postgresql-22.log | grep AUDIT:
[local] [60683] 5b040125.ed0b 2018-05-22 14:41:41.989 EEST psql postgres@testdb line:7 LOG:  AUDIT: OBJECT,2,1,WRITE,INSERT,TABLE,public.orders,"insert into orders (customer,customerid,product,productid,unit_price,quantity) VALUES('magicbattler',1,'some fn skin 2',2,5,2);",<none>
[local] [60683] 5b040125.ed0b 2018-05-22 14:41:52.269 EEST psql postgres@testdb line:8 LOG:  AUDIT: OBJECT,3,1,WRITE,UPDATE,TABLE,public.orders,update orders set quantity=3 where id=2;,<none>
[local] [60683] 5b040125.ed0b 2018-05-22 14:42:03.148 EEST psql postgres@testdb line:9 LOG:  AUDIT: OBJECT,4,1,WRITE,DELETE,TABLE,public.orders,delete from orders  where id=2;,<none>

We observe that the output is identical to the SESSION logging discussed above with the difference that instead of SESSION as audit type (the string next to AUDIT: ) now we get OBJECT.

One caveat with OBJECT logging is that TRUNCATEs are not logged; we have to resort to SESSION logging for those. But in that case we end up getting all WRITE activity for all tables. There are talks among the hackers involved to make each command a separate class.

Another thing to keep in mind is that, in the case of inheritance, if we GRANT access to the auditor on some child table and not the parent, actions on the parent table which translate to actions on rows of the child table will not be logged.

In addition to the above, the IT people in charge of the integrity of the logs must document a strict and well-defined procedure covering the extraction of the audit trail from the PostgreSQL log files. Those logs might be streamed to an external secure syslog server in order to minimize the chances of any interference or tampering.
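
Shipping the PostgreSQL log (and with it the pgaudit entries) to syslog is a matter of standard postgresql.conf settings; a minimal sketch (the facility and ident are arbitrary choices, and the local syslog daemon must be configured to relay to the remote server):

log_destination = 'syslog'
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'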

ClusterControl Release 1.6.2: New Backup Management and Security Features for MySQL & PostgreSQL


We are excited to announce the 1.6.2 release of ClusterControl - the all-inclusive database management system that lets you easily automate and manage highly available open source databases in any environment: on-premise or in the cloud.

ClusterControl 1.6.2 introduces new exciting Backup Management as well as Security & Compliance features for MySQL & PostgreSQL, support for MongoDB v 3.6 … and more!

Release Highlights

Backup Management

  • Continuous Archiving and Point-in-Time Recovery (PITR) for PostgreSQL
  • Rebuild a node from a backup with MySQL Galera clusters to avoid SST

Security & Compliance

  • New, consolidated Security section

Additional Highlights

  • Support for MongoDB v 3.6

View the ClusterControl ChangeLog for all the details!


View Release Details and Resources

Release Details

Backup Management

One of the issues with MySQL and PostgreSQL is that there aren’t really any out-of-the-box tools to simply (in the GUI) pick a restore time: certain operations need to be performed to do that, such as finding the full backup, restoring it, and manually applying any changes that happened after the backup was taken.

ClusterControl provides a single process to restore data to a point in time, with no extra actions needed.

With the same system, users can verify their backups (in the case of MySQL, for instance, ClusterControl will do the installation, set up the cluster, do a restore and, if the backup is sound, mark it valid - which, as one can imagine, represents a lot of steps).

With ClusterControl, users can not only go back to a point in time, but also pick up the exact transaction that happened; and, with surgical precision, restore their data before disaster really strikes.

New for PostgreSQL

Continuous Archiving and Point-in-Time Recovery (PITR) for PostgreSQL: ClusterControl now automates that process, enabling continuous WAL archiving as well as PITR with backups.

New for MySQL Galera Cluster

Rebuild a node from a backup with MySQL Galera clusters to avoid SST: ClusterControl reduces the time it takes to recover a node by avoiding streaming a full dataset over the network from another node.

Security & Compliance

The new Security section in ClusterControl lets users easily check which security features they have enabled (or disabled) for their clusters, thus simplifying the process of taking the relevant security measures for their setups.

Additional New Functionalities

View the ClusterControl ChangeLog for all the details!

 

Download ClusterControl today!

Happy Clustering!

Self-Provisioning of User Accounts in PostgreSQL via Unprivileged Anonymous Access


Note from Severalnines: This blog is being published posthumously as Berend Tober passed away on July 16, 2018. We honor his contributions to the PostgreSQL community and wish peace for our friend and guest writer.

Introduction

In a previous article we introduced the basics of PostgreSQL triggers and stored functions and provided six example use cases, including data validation, change logging, deriving values from inserted data, data hiding with simple updatable views, maintaining summary data in separate tables, and safe invocation of code at elevated privilege. This article builds further on that foundation and presents a technique utilizing a trigger and stored function to facilitate delegating login credential provisioning to limited-privilege (i.e., non-superuser) roles. This feature might be used to reduce administrative workload for high-value systems-administration personnel. Taken to the extreme, we demonstrate anonymous end-user self-provisioning of login credentials, i.e., letting prospective database users provision login credentials on their own, by implementing “dynamic SQL” inside a stored function executed at an appropriately-scoped privilege level.

Helpful Background Reading

The recent article by Sebastian Insausti on How to Secure your PostgreSQL Database includes some highly relevant tips you should be familiar with, namely, Tips #1 - #5 on Client Authentication Control, Server Configuration, User and Role Management, Super User Management, and Data Encryption. We'll use parts of each tip in this article.

Another recent article by Joshua Otwell on PostgreSQL Privileges & User Management also has a good treatment of host configuration and user privileges that goes into a little more detail on those two topics.

Protecting Network Traffic

The proposed feature involves allowing users to provision database login credentials and while doing so, they will specify their new login name and password over the network. Protection of this network communication is essential and can be achieved by configuring the PostgreSQL server to support and require encrypted connections. Transport layer security is enabled in the postgresql.conf file by the “ssl” setting:

ssl = on

Host-Based Access Control

For the present case, we will add a host-based access configuration line in the pg_hba.conf file that allows anonymous (i.e., trusted) login to the database from some appropriate sub-network for the population of prospective database users, literally using the username “anonymous”, and a second configuration line requiring password login for any other login name. Remember that host configurations invoke the first match: the first line applies whenever the “anonymous” username is specified, permitting a trusted (i.e., no password required) connection, and whenever any other username is specified a password is required. For example, if the sample database “sampledb” is to be used, say, by employees only and internally to corporate facilities, then we may configure trusted access for some non-routable internal subnet with:

# TYPE  DATABASE USER      ADDRESS        METHOD
hostssl sampledb anonymous 192.168.1.0/24 trust
hostssl sampledb all       192.168.1.0/24 md5

If the database is to be made available generally to the public, then we may configure “any address” access:

# TYPE  DATABASE USER       ADDRESS  METHOD
hostssl sampledb anonymous  all      trust
hostssl sampledb all        all      md5

Note the above is potentially dangerous without additional precautions, possibly in the application design or at a firewall device, to rate-limit use of this feature, because you know some script kiddie will automate endless account creation just for the lulz.

Note also that we have specified the connection type as “hostssl”, which means connections made using TCP/IP succeed only when made with SSL encryption, protecting the network traffic from eavesdropping.

Locking Down the Public Schema

Since we are allowing possibly unknown (i.e., untrusted) persons to access the database, we will want to be sure that default accesses are capability limited. One important measure is to revoke the default public schema object creation privilege so as to mitigate a recently-published PostgreSQL vulnerability related to default schema privileges (cf. Locking Down the Public Schema by yours truly).

A Sample Database

We’ll start with an empty sample database for illustration purposes:

create database sampledb;
\connect sampledb

revoke create on schema public from public;
alter default privileges revoke all privileges on tables from public;

We also create the anonymous login role corresponding to the earlier pg_hba.conf setting.

create role anonymous login
    nosuperuser 
    noinherit 
    nocreatedb 
    nocreaterole 
    noreplication;

And then we do something novel by defining an unconventional view:

create or replace view person as 
 select 
    null::name as login_name,
    null::name as login_pass;

This view references no table and so a select query always returns an empty row:

select * from person;
 login_name | login_pass 
------------+-------------
            | 
(1 row)

One thing this does for us is, in a sense, to provide documentation or a hint to end users as to what data is required to establish an account. That is, even though a query against the view returns an empty row, the result reveals the names of the two required data elements.

But even better, the existence of this view allows determination of the datatypes required:

\d person
      View "public.person"
    Column    | Type | Modifiers 
--------------+------+-----------
 login_name   | name | 
 login_pass   | name | 

We will be implementing the credential provisioning functionality with a stored function and trigger, so let’s declare an empty function template and the associated trigger:

create or replace function person_iit()
  returns trigger
  set schema 'public'
  language plpgsql
  security definer
  as '
  begin
  end;
  ';

create trigger person_iit
  instead of insert
  on person
  for each row execute procedure person_iit();

Note that we are following the proposed naming convention from the previous article, using the associated table name suffixed with a shorthand abbreviation denoting the attributes of the trigger relationship between the table and the stored function, for an INSTEAD OF INSERT trigger (i.e., suffix “iit”). We have also added the SCHEMA and SECURITY DEFINER attributes to the stored function: the former because it is good practice to set the search path that applies for the duration of function execution, and the latter to facilitate role creation, which is normally a database superuser authority only but in this case will be delegated to anonymous users.

And lastly we add minimally-sufficient permissions on the view to query and insert:

grant select, insert on table person to anonymous;

Let’s Review

Before implementing the stored function code, let’s review what we have. First there’s the sample database owned by the postgres user:

\l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 sampledb  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 

And there are the user roles, including the database superuser and the newly-created anonymous login role:

\du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 anonymous | No inheritance                                             | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

And there’s the view we created and a listing of create and read access privileges granted to the anonymous user by the postgres user:

\d
         List of relations
 Schema |  Name  | Type |  Owner   
--------+--------+------+----------
 public | person | view | postgres
(1 row)


\dp
                                Access privileges
 Schema |  Name  | Type |     Access privileges     | Column privileges | Policies 
--------+--------+------+---------------------------+-------------------+----------
 public | person | view | postgres=arwdDxt/postgres+|                   | 
        |        |      | anonymous=ar/postgres     |                   | 
(1 row)

Lastly, the view detail shows the column names and datatypes as well as the associated trigger:

\d person
      View "public.person"
    Column    | Type | Modifiers 
--------------+------+-----------
 login_name   | name | 
 login_pass   | name | 
Triggers:
    person_iit INSTEAD OF INSERT ON person FOR EACH ROW EXECUTE PROCEDURE person_iit()

Dynamic SQL

We are going to employ dynamic SQL, i.e., constructing the final form of a DDL statement at run-time partially from user-entered data, to fill in the trigger function body. Specifically we hard code the outline of the statement to create a new login role and fill in the specific parameters as variables.

The general form of this command is

create role name [ [ with ] option [ ... ] ]

where option can be any of sixteen specific properties. Generally the defaults are appropriate but we’re going to be explicit about several limiting options and use the form

create role name 
  with 
    login 
    inherit 
    nosuperuser 
    nocreatedb 
    nocreaterole 
    password 'password';

where we will insert the user-specified role name and password at run time.

Dynamically constructed statements are invoked with the execute command:

execute command-string [ INTO [STRICT] target ] [ USING expression [, ... ] ];

which for our specific needs would look like

  execute 'create role '
    || new.login_name
    || ' with login inherit nosuperuser nocreatedb nocreaterole password '
    || quote_literal(new.login_pass);

where the quote_literal function returns its string argument suitably quoted for use as a string literal, satisfying the syntactical requirement that the password be quoted.
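
For instance, a quick illustrative psql session (the input values here are arbitrary examples) shows the quoting behavior, including the doubling of an embedded quote:

select quote_literal('1234'), quote_literal('it''s');
 quote_literal | quote_literal 
---------------+---------------
 '1234'        | 'it''s'
(1 row)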

Once we have the command string built, we supply it as the argument to the pl/pgsql execute command within the trigger function.

Putting this all together looks like:

create or replace function person_iit()
  returns trigger
  set schema 'public'
  language plpgsql
  security definer
  as $$
  begin

  -- note this is for demonstration only. it is vulnerable to sql injection.

  execute 'create role '
    || new.login_name
    || ' with login inherit nosuperuser nocreatedb nocreaterole password '
    || quote_literal(new.login_pass);

  return new;
  end;
  $$;

Let’s Try It!

Everything is in place, so let's give it a whirl! First we switch session authorization to the anonymous user and then insert against the person view:

set session authorization anonymous;
insert into person values ('alice', '1234');

The result is that the new user alice has been added to the system catalog:

\du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 alice     |                                                            | {}
 anonymous | No inheritance                                             | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

It even works directly from the operating system command line by piping a SQL command string to the psql client utility to add user bob:

$ psql sampledb anonymous <<< "insert into person values ('bob', '4321');"
INSERT 0 1

$ psql sampledb anonymous <<< "\du"
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 alice     |                                                            | {}
 anonymous | No inheritance                                             | {}
 bob       |                                                            | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

Apply Some Armor

The initial example of the trigger function is vulnerable to SQL injection attack, i.e. a malicious threat actor could craft input that results in unauthorized access. For example, while connected as the anonymous user role, an attempt to do something out of scope fails appropriately:

set session authorization anonymous;
drop user alice;
ERROR:  permission denied to drop role

But the following malicious input creates a superuser role named ‘eve’ (as well as a decoy account named ‘cathy’):

insert into person 
  values ('eve with superuser login password ''666''; create role cathy', '777');
\du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 alice     |                                                            | {}
 anonymous | No inheritance                                             | {}
 cathy     |                                                            | {}
 eve       | Superuser                                                  | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

Then the surreptitious superuser role can be used to wreak havoc in the database, for example deleting user accounts (or worse!):

\c - eve
drop user alice;
\du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 anonymous | No inheritance                                             | {}
 cathy     |                                                            | {}
 eve       | Superuser                                                  | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

To mitigate this vulnerability, we must sanitize the input. One step is to apply the quote_ident function, which returns its string argument suitably quoted for use as an identifier in an SQL statement: quotes are added when necessary (e.g., if the string contains non-identifier characters or would be case-folded), and embedded quotes are properly doubled:

create or replace function person_iit()
  returns trigger
  set schema 'public'
  language plpgsql
  security definer
  as $$
  begin

  execute 'create role '
    || quote_ident(new.login_name)
    || ' with login inherit nosuperuser nocreatedb nocreaterole password '
    || quote_literal(new.login_pass);

  return new;
  end;
  $$;
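
As a quick illustration of what quote_ident does to hostile input, here is the earlier malicious string passed through the function directly (an illustrative session):

select quote_ident('eve with superuser login password ''666''; create role cathy');
                          quote_ident                           
----------------------------------------------------------------
 "eve with superuser login password '666'; create role cathy"
(1 row)

The entire string is folded into a single double-quoted identifier, so it can no longer terminate the role name early and smuggle in extra options or statements.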

Now if the same SQL injection exploit is attempted to create another superuser named ‘frank’, it fails, and the result is a very unorthodox username:

set session authorization anonymous;
insert into person 
  values ('frank with superuser login password ''666''; create role dave', '777');
\du
                                 List of roles
    Role name          |                         Attributes                         | Member of 
-----------------------+------------------------------------------------------------+----------
 anonymous             | No inheritance                                             | {}
 eve                   | Superuser                                                  | {}
 frank with superuser  |                                                            |
  login password '666';|                                                            |
  create role dave     |                                                            |
 postgres              | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

We can apply further sensible data validation within the trigger function, such as rejecting null or empty values, rejecting whitespace, and requiring usernames to begin with a letter:

create or replace function person_iit()
  returns trigger
  set schema 'public'
  language plpgsql
  security definer
  as $$
  begin

  -- Basic input sanitization

  if new.login_name is null then
    raise exception 'null login_name disallowed';
  elsif position(' ' in new.login_name) > 0 then
    raise exception 'login_name whitespace disallowed';
  elsif length(new.login_name) = 0 then
    raise exception 'login_name must be non-empty';
  elsif not (select new.login_name similar to '[A-Za-z]%') then
    raise exception 'login_name must begin with a letter.';
  end if;

  if new.login_pass is null then
    raise exception 'null login_pass disallowed';
  elsif position(' ' in new.login_pass) > 0 then
    raise exception 'login_pass whitespace disallowed';
  elsif length(new.login_pass) = 0 then
    raise exception 'login_pass must be non-empty';
  end if;

  -- Provision login credentials

  execute 'create role '
    || quote_ident(new.login_name)
    || ' with login inherit nosuperuser nocreatedb nocreaterole password '
    || quote_literal(new.login_pass);

  return new;
  end;
  $$;

and then confirm that the various sanitization checks work:

set session authorization anonymous;
insert into person values (NULL, NULL);
ERROR:  null login_name disallowed
insert into person values ('gina', NULL);
ERROR:  null login_pass disallowed
insert into person values ('gina', '');
ERROR:  login_pass must be non-empty
insert into person values ('', '1234');
ERROR:  login_name must be non-empty
insert into person values ('gi na', '1234');
ERROR:  login_name whitespace disallowed
insert into person values ('1gina', '1234');
ERROR:  login_name must begin with a letter.
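
And a well-formed insert still succeeds; here with a hypothetical user harold:

insert into person values ('harold', 'h4rold');
INSERT 0 1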

Let’s Step it Up a Notch

Suppose we want to store additional metadata or application data related to the created user role, for example a time stamp and source IP address associated with role creation. The view of course cannot satisfy this new requirement since there is no underlying storage, so an actual table is required. Let's further assume we want to restrict visibility of that table from users logging in with the anonymous login role. We can hide the table in a separate namespace (i.e., a PostgreSQL schema) which remains inaccessible to anonymous users. Let's call this the “private” namespace and create the table in it:

create schema private;

create table private.person (
  login_name   name not null primary key,
  inet_client_addr inet default inet_client_addr(),
  create_time timestamptz default now()  
);

A simple additional insert command inside the trigger function records this associated metadata:

create or replace function person_iit()
  returns trigger
  set schema 'public'
  language plpgsql
  security definer
  as $$
  begin

  -- Basic input sanitization
  if new.login_name is null then
    raise exception 'null login_name disallowed';
  elsif position(' ' in new.login_name) > 0 then
    raise exception 'login_name whitespace disallowed';
  elsif length(new.login_name) = 0 then
    raise exception 'login_name must be non-empty';
  elsif not (select new.login_name similar to '[A-Za-z]%') then
    raise exception 'login_name must begin with a letter.';
  end if;

  if new.login_pass is null then
    raise exception 'null login_pass disallowed';
  elsif length(new.login_pass) = 0 then
    raise exception 'login_pass must be non-empty';
  end if;

  -- Record associated metadata
  insert into private.person values (new.login_name);

  -- Provision login credentials

  execute 'create role '
    || quote_ident(new.login_name)
    || ' with login inherit nosuperuser nocreatedb nocreaterole password '
    || quote_literal(new.login_pass);

  return new;
  end;
  $$;

And we can give it an easy test. First we confirm that while connected as the anonymous role only the public.person view is visible and not the private.person table:

set session authorization anonymous;

\d
         List of relations
 Schema |  Name  | Type |  Owner   
--------+--------+------+----------
 public | person | view | postgres
(1 row)
                   
select * from private.person;
ERROR:  permission denied for schema private

And then after a new role insert:

insert into person values ('gina', '1234');

reset session authorization;

select * from private.person;
 login_name | inet_client_addr |          create_time          
------------+------------------+-------------------------------
 gina       | 192.168.2.106    | 2018-06-24 07:56:13.838679-07
(1 row)

the private.person table shows the captured metadata: the client IP address and the row insert time.

Conclusion

In this article, we have demonstrated a technique to delegate PostgreSQL role credential provisioning to non-superuser roles. While the example fully delegated the credentialing functionality to anonymous users, a similar approach could be used to partially delegate the functionality to only trusted personnel while still retaining the benefit of off-loading this work from high-value database or systems administrator personnel. We also demonstrated a technique of layered data access utilizing PostgreSQL schemas, selectively exposing or hiding database objects. In the next article in this series we will expand on the layered data access technique to propose a novel database architecture design for application implementations.

Database Security Monitoring for MySQL and MariaDB


Data protection is one of the most significant aspects of administering a database. Depending on the organizational structure, whether you are a developer, sysadmin or DBA, if you are managing the production database, you must monitor data for unauthorized access and usage. The purpose of security monitoring is twofold: one, to identify unauthorized activity on the database; and two, to check whether databases and their configurations are compliant with security policies and standards on a company-wide basis.

In this article, we will divide monitoring for security in two categories. One will be related to auditing of MySQL and MariaDB databases activities. The second category will be about monitoring your instances for potential security gaps.

Query and connection policy-based monitoring

Continuous auditing is an imperative task for monitoring your database environment. By auditing your database, you can achieve accountability for actions taken or content accessed. Moreover, the audit may include some critical system components, such as the ones associated with financial data to support a precise set of regulations like SOX, or the EU GDPR regulation. Usually, it is achieved by logging information about DB operations on the database to an external log file.

By default, auditing in MySQL and MariaDB is disabled. You can enable it by installing additional plugins, or by capturing all queries with the general_log parameter. The general query log file is a general record of what MySQL is doing: the server records information to this log when clients connect or disconnect, and it logs each SQL statement received from clients. Due to performance issues and a lack of configuration options, the general_log is not a good solution for security audit purposes.
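
That said, for a quick ad-hoc capture the general query log can be toggled at runtime without a restart; just remember to turn it off again, since it records every statement (the file path below is an example):

SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
-- ... capture the activity you are interested in, then disable it:
SET GLOBAL general_log = 'OFF';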

If you use MySQL Enterprise, you can use the MySQL Enterprise Audit plugin, an extension available only with MySQL Enterprise, which is a commercial offering from Oracle. Percona and MariaDB have created their own open source versions of the audit plugin. Lastly, the McAfee plugin for MySQL can also be used with various versions of MySQL. In this article, we will focus on the open source plugins, although the Enterprise version from Oracle seems to be the most robust and stable.

Characteristics of MySQL open source audit plugins

While the open source audit plugins do the same job as the Enterprise plugin from Oracle - they produce output with database queries and connections - there are some major architectural differences.

MariaDB Audit Plugin – The MariaDB Audit Plugin works with MariaDB, MySQL (as of versions 5.5.34 and 10.0.7) and Percona Server. MariaDB started including the Audit Plugin by default from versions 10.0.10 and 5.5.37, and it can be installed in any version from MariaDB 5.5.20. It is the only plugin that supports Oracle MySQL, Percona Server, and MariaDB. It is available on Windows and Linux platforms. Versions starting from 1.2 are the most stable, and it may be risky to use earlier versions in your production environment.

McAfee MySQL Audit Plugin – This plugin does not use the MySQL audit API. It was recently updated to support MySQL 5.7. Some tests show that API-based plugins may provide better performance, but you need to verify that in your environment.

Percona Audit Log Plugin – Percona provides an open source auditing solution that installs with Percona Server 5.5.37+ and 5.6.17+ as part of the installation process. Compared to the other open source plugins, this plugin has richer output features, as it can write XML, JSON, and syslog output.

As it has some internal hooks to the server to be feature-compatible with Oracle’s plugin, it is not available as a standalone plugin for other versions of MySQL.

Plugin installation based on MariaDB audit extension

The installation of open source MySQL plugins is quite similar for MariaDB, Percona, and McAfee versions.
Percona and MariaDB add their plugins as part of the default server binaries, so there is no need to download the plugins separately. The Percona version only officially supports its own fork of MySQL, so there is no direct download from the vendor's website (if you want to use this plugin with MySQL, you will have to obtain the plugin from a Percona Server package). If you would like to use the MariaDB plugin with other forks of MySQL, you can download it from https://downloads.mariadb.com/Audit-Plugin/MariaDB-Audit-Plugin/. The McAfee plugin is available at https://github.com/mcafee/mysql-audit/wiki/Installation.

Before you start the plugin installation, you can check whether the plugin is already present in the system. The plugin directory (dynamic plugins do not require an instance restart) can be checked with:

SHOW GLOBAL VARIABLES LIKE 'plugin_dir';

+---------------+--------------------------+
| Variable_name | Value                    |
+---------------+--------------------------+
| plugin_dir    | /usr/lib64/mysql/plugin/ |
+---------------+--------------------------+

Check the directory returned at the filesystem level to make sure you have a copy of the plugin library. If you do not have server_audit.so or server_audit.dll inside /usr/lib64/mysql/plugin/, then most likely your MariaDB version does not support the plugin, and you should upgrade to the latest version.

The syntax to install the MariaDB plugin is:

INSTALL SONAME 'server_audit';

To check installed plugins you need to run:

SHOW PLUGINS;
MariaDB [(none)]> show plugins;
+-------------------------------+----------+--------------------+--------------------+---------+
| Name                          | Status   | Type               | Library            | License |
+-------------------------------+----------+--------------------+--------------------+---------+
| binlog                        | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| mysql_native_password         | ACTIVE   | AUTHENTICATION     | NULL               | GPL     |
| mysql_old_password            | ACTIVE   | AUTHENTICATION     | NULL               | GPL     |
| wsrep                         | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| MRG_MyISAM                    | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| MEMORY                        | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| CSV                           | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| MyISAM                        | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| CLIENT_STATISTICS             | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| INDEX_STATISTICS              | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| TABLE_STATISTICS              | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| USER_STATISTICS               | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| PERFORMANCE_SCHEMA            | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| InnoDB                        | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| INNODB_TRX                    | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| INNODB_LOCKS                  | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| INNODB_LOCK_WAITS             | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| INNODB_CMP                    | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
...
| INNODB_MUTEXES                | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| INNODB_SYS_SEMAPHORE_WAITS    | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| INNODB_TABLESPACES_ENCRYPTION | ACTIVE   | INFORMATION SCHEMA | NULL               | BSD     |
| INNODB_TABLESPACES_SCRUBBING  | ACTIVE   | INFORMATION SCHEMA | NULL               | BSD     |
| Aria                          | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| SEQUENCE                      | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| user_variables                | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| FEEDBACK                      | DISABLED | INFORMATION SCHEMA | NULL               | GPL     |
| partition                     | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| rpl_semi_sync_master          | ACTIVE   | REPLICATION        | semisync_master.so | GPL     |
| rpl_semi_sync_slave           | ACTIVE   | REPLICATION        | semisync_slave.so  | GPL     |
| SERVER_AUDIT                  | ACTIVE   | AUDIT              | server_audit.so    | GPL     |
+-------------------------------+----------+--------------------+--------------------+---------+

If you need additional information, check the PLUGINS table in the information_schema database which contains more detailed information.

Another way to install the plugin is to enable it in my.cnf and restart the instance. An example of a basic audit plugin configuration for MariaDB could be:

server_audit_events=CONNECT
server_audit_file_path=/var/log/mysql/audit.log
server_audit_file_rotate_size=1073741824
server_audit_file_rotations=8
server_audit_logging=ON
server_audit_incl_users=
server_audit_excl_users=
server_audit_output_type=FILE
server_audit_query_log_limit=1024

The above settings should be placed in my.cnf. The audit plugin will create the file /var/log/mysql/audit.log, which will rotate at a size of 1GB, keeping eight rotations before a file is overwritten. The file will contain only information about connections.
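
You can verify the configuration the plugin is actually running with at any time; a minimal check looks like this:

SHOW GLOBAL VARIABLES LIKE 'server_audit%';
-- plugin status counters (active flag, current log file, failed writes):
SHOW GLOBAL STATUS LIKE 'server_audit%';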

Currently, there are sixteen settings which you can use to adjust the MariaDB audit plugin.

server_audit_events
server_audit_excl_users
server_audit_file_path
server_audit_file_rotate_now
server_audit_file_rotate_size
server_audit_file_rotations
server_audit_incl_users
server_audit_loc_info
server_audit_logging
server_audit_mode
server_audit_output_type
server_audit_query_log_limit
server_audit_syslog_facility
server_audit_syslog_ident
server_audit_syslog_info
server_audit_syslog_priority

Among them, you can find options to include or exclude users, set different logging events (CONNECT or QUERY) and switch between file and syslog.
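
Most of these settings are dynamic, so they can be adjusted at runtime without restarting the server; for example (the user names below are placeholders):

SET GLOBAL server_audit_events = 'CONNECT,QUERY';
SET GLOBAL server_audit_incl_users = 'app_user,reporting';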

To make sure the plugin is enabled upon server startup, you have to set plugin_load=server_audit=server_audit.so in your my.cnf settings. The configuration can additionally be protected with server_audit=FORCE_PLUS_PERMANENT, which disables the plugin uninstall option:

UNINSTALL PLUGIN server_audit;

ERROR 1702 (HY000):
Plugin 'server_audit' is force_plus_permanent and can not be unloaded

Here are some sample entries produced by the MariaDB audit plugin:

20180817 20:00:01,slave,cmon,cmon,31,0,DISCONNECT,information_schema,,0
20180817 20:47:01,slave,cmon,cmon,17,0,DISCONNECT,information_schema,,0
20180817 20:47:02,slave,cmon,cmon,19,0,DISCONNECT,information_schema,,0
20180817 20:47:02,slave,cmon,cmon,18,0,DISCONNECT,information_schema,,0
20180819 17:19:19,slave,cmon,cmon,12,0,CONNECT,information_schema,,0
20180819 17:19:19,slave,root,localhost,13,0,FAILED_CONNECT,,,1045
20180819 17:19:19,slave,root,localhost,13,0,DISCONNECT,,,0
20180819 17:19:20,slave,cmon,cmon,14,0,CONNECT,mysql,,0
20180819 17:19:20,slave,cmon,cmon,14,0,DISCONNECT,mysql,,0
20180819 17:19:21,slave,cmon,cmon,15,0,CONNECT,information_schema,,0
20180819 17:19:21,slave,cmon,cmon,16,0,CONNECT,information_schema,,0
20180819 19:00:01,slave,cmon,cmon,17,0,CONNECT,information_schema,,0
20180819 19:00:01,slave,cmon,cmon,17,0,DISCONNECT,information_schema,,0

Schema changes report

If you need to track only DDL changes, you can use the ClusterControl Operational Report on Schema Change. The Schema Change Detection Report shows any DDL changes in your database. This functionality requires an additional parameter in the ClusterControl configuration file; if it is not set, you will see the following information: "schema_change_detection_address is not set in /etc/cmon.d/cmon_1.cnf". Once that is in place, an example output may look like the one below:

It can be set up with a schedule, and the reports emailed to recipients.

ClusterControl: Schedule Operational Report

MySQL Database Security Assessment

Package upgrade check

First, we will start with security checks. Being up-to-date with MySQL patches will help reduce risks associated with known vulnerabilities present in the MySQL server. You can keep your environment up-to-date by using the vendors’ package repository. Based on this information you can build your own reports, or use tools like ClusterControl to verify your environment and alert you on possible updates.

The ClusterControl Upgrade Report gathers information from the operating system and compares it to the packages available in the repository. The report is divided into four sections: upgrade summary, database packages, security packages, and other packages. You can quickly compare what you have installed on your system and find a recommended upgrade or patch.

ClusterControl: Upgrade Report
ClusterControl: Upgrade Report details

To check your installed version manually you can run

SHOW VARIABLES WHERE variable_name LIKE "version";

With security bulletins like:
https://www.oracle.com/technetwork/topics/security/alerts-086861.html
https://nvd.nist.gov/view/vuln/search-results?adv_search=true&cves=on&cpe_vendor=cpe%3a%2f%3aoracle&cpe_produ
https://www.percona.com/doc/percona-server/LATEST/release-notes/release-notes_index.html
https://downloads.mariadb.org/mariadb/+releases/
https://www.cvedetails.com/vulnerability-list/vendor_id-12010/Mariadb.html
https://www.cvedetails.com/vulnerability-list/vendor_id-13000/Percona.html

Or vendor repositories:

On Debian

sudo apt list mysql-server

On RHEL/CentOS

yum list | grep -i mariadb-server

Accounts without password

Blank passwords allow a user to log in without using a password. MySQL used to come with a set of pre-created users, some of which could connect to the database without a password or, even worse, as anonymous users. Fortunately, this has changed in MySQL 5.7, which finally comes only with a root account that uses the password you chose at installation time.

Execute the following query to audit for such accounts, and set a password for each row returned:

SELECT User,host
FROM mysql.user
WHERE authentication_string='';
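
On MySQL 5.7 / MariaDB 10.2 and later, the fix can be a simple ALTER USER (the account name and password below are placeholders):

ALTER USER 'app_user'@'localhost' IDENTIFIED BY 'N3w-Str0ng-P4ss!';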

Additionally, you can install a password validation plugin and implement a more secure policy:

INSTALL PLUGIN validate_password SONAME 'validate_password.so';

SHOW VARIABLES LIKE 'default_password_lifetime';
SHOW VARIABLES LIKE 'validate_password%';

A good starting point can be:

plugin-load=validate_password.so
validate-password=FORCE_PLUS_PERMANENT
validate_password_length=14
validate_password_mixed_case_count=1
validate_password_number_count=1
validate_password_special_char_count=1
validate_password_policy=MEDIUM

Of course, these settings will depend on your business needs.
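
With the plugin loaded, you can also score a candidate password against the active policy on a scale of 0 to 100 (the passwords below are just examples):

SELECT VALIDATE_PASSWORD_STRENGTH('password123');
SELECT VALIDATE_PASSWORD_STRENGTH('N3w-Str0ng-P4ss!');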

Remote access monitoring

Avoiding the use of wildcards within hostnames helps control the specific locations from which a given user may connect and interact with the database.

You should make sure that every user can connect to MySQL only from specific hosts. You can always define several entries for the same user, which should help reduce the need for wildcards.

Execute the following SQL statement to assess this recommendation (make sure no rows are returned):

SELECT user, host FROM mysql.user WHERE host = '%';
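
If the query returns rows, narrow each account down to the hosts it actually needs. RENAME USER changes the host part while preserving the account's privileges (the names below are placeholders):

RENAME USER 'app_user'@'%' TO 'app_user'@'192.168.1.10';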

Test database

The default MySQL installation comes with an unused database called test, which is available to every user, especially anonymous users. Such users can create tables and write to them. This can potentially become a problem on its own, and the writes would add overhead and reduce database performance. It is recommended that the test database be dropped. To determine if the test database is present, run:

SHOW DATABASES LIKE 'test';

If the test database is present, it could mean that the mysql_secure_installation script, which drops the test database (and performs other security-related actions), was not executed.
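
If you prefer not to re-run the whole script, the relevant cleanup can also be done manually; a minimal sketch:

DROP DATABASE IF EXISTS test;
-- remove any leftover grants on the test databases:
DELETE FROM mysql.db WHERE Db = 'test' OR Db = 'test\\_%';
FLUSH PRIVILEGES;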

LOAD DATA INFILE

If both the server and the client have the ability to run LOAD DATA LOCAL INFILE, a client will be able to load data from a local file to a remote MySQL server. The local_infile parameter dictates whether files located on the MySQL client's computer can be loaded via LOAD DATA LOCAL INFILE.

This can potentially be abused to read files the client has access to; for example, on an application server, one could access any data that the HTTP server has access to. To avoid it, you need to set local-infile=0 in my.cnf.

Execute the following SQL statement and ensure the Value field is set to OFF:

SHOW VARIABLES WHERE Variable_name = 'local_infile';
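
The variable is dynamic, so it can also be switched off at runtime; keep the my.cnf entry as well so the setting survives a restart:

SET GLOBAL local_infile = 0;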

Monitor for non-encrypted tablespaces

Starting from MySQL 5.7.11, InnoDB supports data encryption for tables stored in file-per-table tablespaces. This feature provides at-rest encryption for physical tablespace data files. To check whether your tables have been encrypted, run:

mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES
       WHERE CREATE_OPTIONS LIKE '%ENCRYPTION="Y"%';

+--------------+------------+----------------+
| TABLE_SCHEMA | TABLE_NAME | CREATE_OPTIONS |
+--------------+------------+----------------+
| test         | t1         | ENCRYPTION="Y" |
+--------------+------------+----------------+

As a part of the encryption, you should also consider encryption of the binary log. The MySQL server writes plenty of information to binary logs.
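
As a sketch, enabling encryption on an existing file-per-table InnoDB table looks like this, assuming a keyring plugin (e.g. keyring_file) is already configured on the server; the schema and table names are placeholders:

ALTER TABLE mydb.mytable ENCRYPTION='Y';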

Encrypted connection validation

In some setups, the database should not be accessible through the network at all, with every connection managed locally through the Unix socket. In such cases, you can add the skip-networking variable to my.cnf. It prevents MySQL from using any TCP/IP connection, so that on Linux only Unix socket connections are possible.

However, this is a rather rare situation, as it is common to access MySQL over the network. You then need to monitor that your connections are encrypted. MySQL supports SSL as a means of encrypting traffic both between MySQL servers (replication) and between MySQL servers and clients. If you use Galera Cluster, similar features are available: both intra-cluster communication and connections with clients can be encrypted using SSL. To check if you use SSL encryption, run the following queries:

SHOW variables WHERE variable_name = 'have_ssl'; 
select ssl_verify_server_cert from mysql.slave_master_info;
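
Beyond checking server-side support, you can enforce encrypted connections per account (MySQL 5.7+ syntax shown; older versions use GRANT ... REQUIRE SSL, and the account name is a placeholder) and verify that the current session actually negotiated TLS:

ALTER USER 'app_user'@'%' REQUIRE SSL;
SHOW SESSION STATUS LIKE 'Ssl_cipher';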

That's it for now. This is not a complete list; do let us know if there are any other checks that you are doing today on your production databases.

ClusterControl Developer Studio: Creating an Advisor to check for SELinux and Meltdown/Spectre Part 2


In part 1 of this blog, we showed you how to integrate a basic check for SELinux modes. In this part 2, we'll go over integrating Meltdown/Spectre checks into our Advisors, setting an alarm, and then see how to debug the script.

Let's have a quick review of the Meltdown/Spectre vulnerabilities first. Currently, there are 8 variants covered by these CVEs.

Some of these vulnerabilities, such as the so-called L1 Terminal Fault (L1TF), a speculative execution side-channel attack, were only recently discovered, in contrast to the original Meltdown/Spectre variants. So let's go ahead and integrate this into our Advisor, scan our servers to identify whether any are affected, and trigger an alarm if required.

ClusterControl Advisor Implementation and Shell Script Invocation

First, we'll set up the shell script. I'm using this implementation from Stéphane Lesimple (a.k.a. speed47). On my MySQL nodes, I run the following bash commands:

$ sudo wget https://meltdown.ovh -O /usr/bin/spectre-meltdown-checker.sh
$ sudo chmod +x /usr/bin/spectre-meltdown-checker.sh

Now, let’s create the new file as follows:

Then, copy the code that I have in my own Github repository, open the file we have just created under myadvisors/host/spectre-meltdown-checker.js in Cluster > Manage > Developer Studio as seen below, and paste the code there.

Before going through the code, let's first discuss the logical flow that we want to achieve.

Now, let's dig into the code and explain what we are going to achieve. In lines 16 - 25, we declare the myAlarm(<title>, <message>, <recommendation>) function. It simply returns the integer that the alarmId(<category>, <isHost>, <title>, <message>, <recommendation>) function returns. The returned value can be used when creating an alarm.

function myAlarm(title, message, recommendation)
{
  return Alarm::alarmId(
        Node,
      true,
        title,
        message,
        recommendation
  );
}

Now let's skip some lines and go to the important ones. In lines 48 - 50, we're calling

var shell = "test -e /usr/bin/spectre-meltdown-checker.sh";
var retval = host.system(shell);
var reply = retval["result"];

We're checking whether spectre-meltdown-checker.sh has been set up and installed under the /usr/bin path. Lines 56 - 72 are a fallback in case we don't find the script: we notify the user through the Advisors page that it has to be installed and set up. If it has been set up properly, lines 75 - 85 check whether a report has already been generated, and look as follows:

var dateTime = CmonDateTime::currentDateTime();
var dateToday = dateTime.toString(ShortDateFormat);
dateToday = dateToday.replace("/", "-");
var todayLogFile = "/tmp/spectre-meltdown" + "_" + dateToday + ".log";

shell = "test -e " + todayLogFile; // if /tmp/spectre-metldown_m.yy.dd.log exist.
retval = host.system(shell);

shell = "wc -l " + todayLogFile + "|awk -F '''{print $1}'|tr -d '\n'"; // check if the file has contents
var retvalLogContents = host.system(shell);
replyLogContents = retvalLogContents["result"];

If no result has been generated yet, lines 90 - 99 handle the generation of the report by calling spectre-meltdown-checker.sh. Let's see below:

// Let's generate result if command fails and log do not exist.
shell = "sudo nohup /usr/bin/spectre-meltdown-checker.sh --batch json > " + todayLogFile;
retval = host.system(shell);
reply = retval["result"];
var jsonMetldownCheckerReply = JSON::parse(reply.toString());
var errTimeoutMsg = "ssh timeout: Execution timeout";


print(shell);                
print ("retval" + retval);
print("jsonMetldownCheckerReply " + retval);

Now, line 90 shows that we are calling nohup (no hang up) for /usr/bin/spectre-meltdown-checker.sh and passing --batch json to format the output as JSON. Why use nohup? Well, CCDSL has its own limitations, and we're enforcing a 5-second timeout for such host.system() command calls, so the return message would be like

ssh timeout: Execution timeout 5 secs has reached (channel_closing).

We’re planning to make this a configurable value, but we have to live with it for now.

Since we're going to generate a report, lines 115 - 128 handle a notification in case the generated log file hasn't been created yet, and tell the user to come back after a minute. You can also run the advisor manually by clicking the button for the spectre-meltdown-checker.js script we have in the Developer Studio.

We're almost done with the script now, so let's go further down to lines 130 - 144:

shell = "cat " + todayLogFile + " | tr -d '\n'";
retval = host.system(shell);
reply = retval["result"];
jsonStr = '{"list":' + reply.toString() + '}';
mapSpectreMeltdownResult = JSON::parse(jsonStr);

for (ndex=0; ndex < mapSpectreMeltdownResult["list"].size(); ndex++) {
    if (mapSpectreMeltdownResult["list"][ndex]["VULNERABLE"]) {
        msg  += "<br />Host is affected by " 
             + mapSpectreMeltdownResult["list"][ndex]["CVE"] + "/"
             + mapSpectreMeltdownResult["list"][ndex]["NAME"] + ". "
             + "Suggested action: &quot;" + mapSpectreMeltdownResult["list"][ndex]["INFOS"] + "&quot;";
        
    } 
}

Let's have a look at that. Lines 130 - 131 invoke a shell command, in this case simply reading the log with cat. After fetching the result, we parse it again as a JSON array wrapped under the key name “list”. One of CCDSL's limitations as of this time is that it is not able to handle a JSON array that starts and ends with square brackets “[” and “]”. However, there's a workaround: enclosing the array in curly braces resolves the case here. Lines 136 - 144 then run a for loop to handle the multi-element values of the variable mapSpectreMeltdownResult, which is where we store the contents taken from the generated report as we parse it.

Setting up the Alarm

Now we're almost at the end of the code. Go to lines 146 - 156,

if (msg.length()) {
    var recommendation = "We advise to update your kernel to the latest version or check your Linux Distro and see the recent updates about this CVE.";
    advice.setSeverity(Warning);
    advice.setJustification(msg);
    advice.setAdvice(recommendation);

     myAlarmId = myAlarm("Metldown/Spectre Affected!", msg, recommendation);
     // Let's raise an alarm.
     host.raiseAlarm(myAlarmId, Warning);
}

we check the message length (a non-empty message means a vulnerability was detected on the host), then set the Advisor advice in lines 149 - 151. On lines 153 - 155, you'll see that we set up the Alarm. It's pretty easy, eh? Line 153 calls the function we defined earlier at lines 16 - 25. By just calling the host.raiseAlarm(<type>, <severity>, <[message]>) method, we are able to raise an Alarm, which would look like below:

Once you click on the warning, it’ll show as follows:

and clicking “Full Alarm Details” reveals all of the messages, as follows:

Since we have set up the alarm, once you have this script compiled and run under Developer Studio, schedule this Advisor. To quickly check the result and see it in action, you can schedule it, say, every 5 or 10 minutes, since the script will run only once per day as long as the generated report exists. This is how it looks:

Isn't it nice? This is cool stuff: we're able to integrate security checks like these Spectre/Meltdown vulnerabilities. It's pretty easy, especially if you have a JavaScript background, but regardless of familiarity it should be easy to handle, as there are plenty of examples in our Github repository.

Debugging the Advisor Script

The CCDSL is not an especially rich and sophisticated language, but it does provide ways to debug your script, and there is a tool to achieve this.

Let’s go back to lines 97 - 99,

print(shell);                
print ("retval" + retval);
print("jsonMetldownCheckerReply " + retval);

These allow us to print what each variable holds. It's not pretty, but it works and does what we need. For this example, it shows the following result in the Message tab of the Developer Studio:

Debugging with ClusterControl CLI tools

Aside from using the print() function, we can actually use the CLI tools to run the .js file. For example, let's try to debug the JavaScript file.

[root@ccnode vagrant]# s9s script --execute --cluster-id=2  -u admin --password=b8be2b56-80f9-45c7-a248-65ee6744a12f --print-json spectre-meltdown-checker.js
{
    "controller_id": "clustercontrol",
    "reply_received": "2018-09-28T23:39:40.084Z",
    "request_created": "2018-09-28T23:39:40.082Z",
    "request_id": 2,
    "request_processed": "2018-09-28T23:39:40.087Z",
    "request_status": "Ok",
    "request_user_id": 3,
    "results": 
    {
        "exitStatus": "null",
        "fileName": "spectre-meltdown-checker.js",
        "messages": [ 
        {
            "lineNumber": 16,
            "message": "spectre-meltdown-checker.js:16: syntax error.",
            "severity": "error"
        } ],
        "status": "ParseError"
    }
}

Let me explain what we’re doing here.

  1. I uploaded the file named spectre-meltdown-checker.js to the ClusterControl monitor host
  2. Locate your username and password, which you can find in /etc/s9s.conf. Take note that you have to read the file with sudo privileges since it is owned by root
    e.g. content of /etc/s9s.conf
    #
    # Configuration file created by the Cmon Controller for the
    # s9s command line tool. Please feel free to edit or 
    # remove this file
    #
    [global]
    controller    = https://localhost:9501
    cmon_user     = "admin"
    cmon_password = "b8be2b56-80f9-45c7-a248-65ee6744a12f"
  3. Run the command as follows,
    $ s9s script --execute --cluster-id=<your-cluster-id>  -u <your-admin-username> --password=<your-admin-password> --print-json spectre-meltdown-checker.js

Based on the example above, I intentionally made an error in the script to show you how it works. We can even play around with the JSON values by piping them to python.

e.g.

// Let’s get the advice value
[root@ccnode vagrant]# s9s script --execute --cluster-id=2  -u admin --password=b8be2b56-80f9-45c7-a248-65ee6744a12f --print-json spectre-meltdown-checker.js | python -c 'import json,sys;obj=json.load(sys.stdin);print obj["results"]["exitStatus"]["0"]["advice"]';
We advise to update your kernel to the latest version or check your Linux Distro and see the recent updates about this CVE.

To get the contents of the messages, you can even do something like this:

e.g.

[root@ccnode vagrant]# s9s script --execute --cluster-id=2  -u admin --password=b8be2b56-80f9-45c7-a248-65ee6744a12f --print-json spectre-meltdown-checker.js | python -c 'import json,sys;obj=json.load(sys.stdin);size=len(obj["results"]["messages"]); map(lambda i: sys.stdout.write(obj["results"]["messages"][i]["message"] + "\n"), range(size))';
   
192.168.70.10
==========================
Meltdown/Spectre Check
We advise to update your kernel to the latest version or check your Linux Distro and see the recent updates about this CVE.
<br />Host is affected by CVE-2017-5753/SPECTRE VARIANT 1. Suggested action: &quot;Kernel source needs to be patched to mitigate the vulnerability&quot;<br />Host is affected by CVE-2017-5715/SPECTRE VARIANT 2. Suggested action: &quot;IBRS+IBPB or retpoline+IBPB is needed to mitigate the vulnerability&quot;<br />Host is affected by CVE-2017-5754/MELTDOWN. Suggested action: &quot;PTI is needed to mitigate the vulnerability&quot;<br />Host is affected by CVE-2018-3640/VARIANT 3A. Suggested action: &quot;an up-to-date CPU microcode is needed to mitigate this vulnerability&quot;<br />Host is affected by CVE-2018-3639/VARIANT 4. Suggested action: &quot;Neither your CPU nor your kernel support SSBD&quot;<br />Host is affected by CVE-2018-3620/VARIANT 4. Suggested action: &quot;Your kernel doesn't support PTE inversion, update it&quot;

   
192.168.70.20
==========================
Meltdown/Spectre Check
We advise to update your kernel to the latest version or check your Linux Distro and see the recent updates about this CVE.
<br />Host is affected by CVE-2017-5753/SPECTRE VARIANT 1. Suggested action: &quot;Kernel source needs to be patched to mitigate the vulnerability&quot;<br />Host is affected by CVE-2017-5715/SPECTRE VARIANT 2. Suggested action: &quot;IBRS+IBPB or retpoline+IBPB is needed to mitigate the vulnerability&quot;<br />Host is affected by CVE-2017-5754/MELTDOWN. Suggested action: &quot;PTI is needed to mitigate the vulnerability&quot;<br />Host is affected by CVE-2018-3640/VARIANT 3A. Suggested action: &quot;an up-to-date CPU microcode is needed to mitigate this vulnerability&quot;<br />Host is affected by CVE-2018-3639/VARIANT 4. Suggested action: &quot;Neither your CPU nor your kernel support SSBD&quot;<br />Host is affected by CVE-2018-3620/VARIANT 4. Suggested action: &quot;Your kernel doesn't support PTE inversion, update it&quot;

   
192.168.70.30
==========================
Meltdown/Spectre Check
Script Setup/Installation required!
<br />Empty reply from the command. It looks like you have not setup the shell script yet. You can get the shell script from https://meltdown.ovh<br /> or run &quot;sudo wget https://meltdown.ovh -O /usr/bin/spectre-meltdown-checker.sh; sudo chmod +x /usr/bin/spectre-meltdown-checker.sh&quot;

Conclusion

The scripts we have created, selinux-checker.js and spectre-meltdown-checker.js, are simple implementations that show how you can integrate these kinds of checks. For the Spectre/Meltdown checks, we could even improve the implementation, for example by creating an Advisor that removes the /tmp/spectre-meltdow*.log files when their number grows too high.

The ClusterControl Domain Specific Language is simple but powerful. There are plenty of ways to incorporate your tools or scripts into the Advisors. We’d like to hear more of your feedback and experiences.
