<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Elijah Paul]]></title><description><![CDATA[Linux, PHP and Web Stuff]]></description><link>https://elijahpaul.co.uk/</link><image><url>https://elijahpaul.co.uk/favicon.png</url><title>Elijah Paul</title><link>https://elijahpaul.co.uk/</link></image><generator>Ghost 5.0</generator><lastBuildDate>Tue, 02 Sep 2025 22:45:08 GMT</lastBuildDate><atom:link href="https://elijahpaul.co.uk/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Deploying Cachet on Carina (by Rackspace) Docker Environment]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://res.cloudinary.com/qunux/image/upload/v1566652340/cachet_carina_header2_gkvuxl.svg" alt loading="lazy"><br>
Building a status page is one of those tasks which seems to forever linger on the to-do list of so many small organizations (and some large too). Despite paid services such as <a href="https://status.io/pricing">status.io</a> and <a href="https://www.statuspage.io/pricing">statuspage.io</a> both offering plans starting at $29/mo (which admittedly most would consider reasonable)</p>]]></description><link>https://elijahpaul.co.uk/deploying-cachet-on-carina-by-rackspace-docker-environment/</link><guid isPermaLink="false">5f68100167fa6f0001d1955b</guid><category><![CDATA[status]]></category><category><![CDATA[uptime]]></category><category><![CDATA[docker]]></category><category><![CDATA[System Administrator]]></category><category><![CDATA[Cachet]]></category><category><![CDATA[Carina]]></category><category><![CDATA[Rackspace]]></category><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Mon, 02 Nov 2015 22:06:07 GMT</pubDate><media:content url="https://elijahpaul.co.uk/content/images/2015/11/cachet_carina_header.svg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://elijahpaul.co.uk/content/images/2015/11/cachet_carina_header.svg" alt="Deploying Cachet on Carina (by Rackspace) Docker Environment"><p><img src="https://res.cloudinary.com/qunux/image/upload/v1566652340/cachet_carina_header2_gkvuxl.svg" alt="Deploying Cachet on Carina (by Rackspace) Docker Environment" loading="lazy"><br>
Building a status page is one of those tasks that seems to linger forever on the to-do list of so many small organizations (and some large ones too). Paid services such as <a href="https://status.io/pricing">status.io</a> and <a href="https://www.statuspage.io/pricing">statuspage.io</a> both offer plans starting at $29/mo (which admittedly most would consider reasonable), yet there are many who, for a variety of reasons, can&apos;t justify this cost. Personally, I think both status.io and statuspage.io are great; however, their entry plans have always felt a little anaemic in the value-for-money department for my tastes.</p>
<p>Enter <a href="https://cachethq.io/">Cachet</a>, an open-source, self-hosted status page application. They say &apos;imitation is the sincerest form of flattery&apos;, and it seems the Cachet <a href="https://github.com/jbrooksuk">developer</a>(s) took this to heart: Cachet has a similar look and feel to its commercial counterparts, and incorporates many of their features too. Just shy of <a href="https://james-brooks.uk/cachet/">a year old</a>, it has made encouraging progress in that time, thanks in large part to its passionate and active development team.</p>
<table border="0" style="background-color:none;border-collapse:collapse;border:0px solid #555555;color:#333333;width:100%;text-align:center" cellpadding="6" cellspacing="3">
<tr>
<td><strong>Cachet</strong></td>
<td><strong>Status.io</strong></td>
<td><strong>Statuspage.io</strong></td>
</tr>
	<tr>
		<td><a href="https://demo.cachethq.io/" target="_blank"><img src="https://res.cloudinary.com/qunux/image/upload/v1446456845/cachet_ss_csllmb.png" alt="Deploying Cachet on Carina (by Rackspace) Docker Environment"></a></td>
		<td><a href="https://status.docker.com/" target="_blank"><img src="https://res.cloudinary.com/qunux/image/upload/v1446456845/status.io_ss_uyjvrt.png" alt="Deploying Cachet on Carina (by Rackspace) Docker Environment"></a></td>
		<td><a href="http://metastatuspage.com/" target="_blank"><img src="https://res.cloudinary.com/qunux/image/upload/v1446456841/statuspage.io_ss_kfmicf.png" alt="Deploying Cachet on Carina (by Rackspace) Docker Environment"></a></td>		
	</tr>
</table>
<p>So what does Cachet offer?</p>
<ul>
<li>JSON API</li>
<li>Metrics system, for graphing (e.g. website response times)</li>
<li>Multiple user support</li>
<li>Subscriber email notifications</li>
<li>Two-factor authentication, with Google Authenticator</li>
<li>Pretty, customizable (CSS), responsive Bootstrap 3 design</li>
<li>Translated into eleven languages</li>
<li>Cross-database support: MySQL, MariaDB, PostgreSQL and SQLite</li>
</ul>
<p><em>A quick feature comparison:</em></p>
<table border="1" style="background-color:none;border-collapse:collapse;border:1px solid #555555;color:#333333;width:100%;text-align:center" cellpadding="6" cellspacing="3">
	<tr>
		<td></td>
		<td><a href="https://cachethq.io" target="_blank">Cachet</a></td>
		<td><a href="http://staytus.co/" target="_blank">Staytus</a></td>
		<td><a href="https://status.io/" target="_blank">Status.io</a>*</td>
		<td><a href="https://www.statuspage.io/" target="_blank">StatusPage.io</a>*</td>
	</tr>
	<tr>
		<td><strong>Open Source</strong></td>
		<td>Yes</td>
		<td>Yes</td>
		<td>No</td>
		<td>No</td>
	</tr>
	<tr>
		<td><strong>Subscribers</strong></td>
		<td>Unlimited</td>
		<td>Unlimited</td>
		<td>300</td>
		<td>250</td>
	</tr>
	<tr>
		<td><strong>Team Members</strong></td>
		<td>Unlimited</td>
		<td>Unlimited</td>
		<td>3</td>
		<td>2</td>
	</tr>
	<tr>
		<td><strong>Metrics</strong></td>
		<td>Unlimited</td>
		<td>Unlimited</td>
		<td>3</td>
		<td>2</td>
	</tr>
	<tr>
		<td><strong>API</strong></td>
		<td>Yes</td>
		<td>Yes</td>
		<td>Yes</td>
		<td>Yes</td>
	</tr>
	<tr>
		<td><strong>Notifications /<br> Integrations</strong></td>
		<td>Email, RSS</td>
		<td>Email</td>
		<td>Email, RSS, SMS,<br> Webhook, Twitter,<br> IRC, Slack, HipChat,<br> iCalendar</td>
		<td>Email, SMS, Webhook,<br> HipChat, Slack,<br>Campfire, Sqwiggle</td>
	</tr>
</table>
<p>*<em>Entry level plan @ $29/mo</em></p>
<p>Keep in mind that status.io and statuspage.io both offer extra features not listed in the table above, as well as providing their services on highly available, redundant and scalable infrastructure (across all price plans). So deploying Cachet may not meet everyone&apos;s requirements (just yet!).</p>
<p>However, I have no doubt (or high hopes at least) that Cachet&apos;s feature list will continue to grow, eventually matching that of its commercial competitors.</p>
<h1 id="carina">Carina</h1>
<p>You may not have heard of <a href="https://getcarina.com/">Carina</a> yet, since it literally <a href="https://getcarina.com/blog/announcing-carina/">launched a week ago</a>.</p>
<p>Carina is essentially a Docker &amp; Docker Swarm container environment aimed at developers. It&apos;s currently in a free open beta, which makes it an ideal platform on which to trial applications such as Cachet. Rackspace (the provider of Carina) has also indicated that <a href="https://getcarina.com/docs/faq/#how-long-will-carina-be-free-when-you-start-charging-what-will-it-cost">there will be a free tier available</a> even when it does start charging for the service.</p>
<h1 id="deploycachetoncarina">Deploy Cachet on Carina</h1>
<h3 id="setupyourcarinaaccountcluster">Set up your Carina Account &amp; Cluster</h3>
<p>First of all, if you haven&apos;t already, <a href="https://app.getcarina.com/app/signup">sign up for a free account</a> with Carina.</p>
<p>Follow the <strong>Create Your Cluster</strong> &amp; <strong>Connect to Your Cluster</strong> sections in the <a href="https://getcarina.com/docs/getting-started/getting-started-on-carina/">Getting Started on Carina</a> guide.</p>
<h3 id="deployadatabasecontainer">Deploy a Database Container</h3>
<p>Once you&apos;re connected to your Carina cluster, run a database container (you can either pass in environment variables for the DB, or mount a config with <code>-v /my/database.php:/var/www/html/config/database.php</code>):</p>
<pre><code>$ export DB_USERNAME=cachet
$ export DB_PASSWORD=cachet
$ export DB_ROOT_PASSWORD=cachet
$ export DB_DATABASE=cachet
$ docker run --name mariadb -e MYSQL_USER=$DB_USERNAME -e MYSQL_PASSWORD=$DB_PASSWORD  -e MYSQL_ROOT_PASSWORD=$DB_ROOT_PASSWORD -e MYSQL_DATABASE=$DB_DATABASE -d mariadb:latest
</code></pre>
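As mentioned above, instead of passing environment variables you can mount a database config at `/var/www/html/config/database.php`. Cachet is a Laravel application, so the mounted file is a standard Laravel database config. A minimal sketch — the exact keys depend on your Cachet version, so treat this as illustrative only:

```php
<?php

// Hypothetical /my/database.php for mounting into the Cachet container.
// Defaults mirror the environment variables used elsewhere in this post;
// confirm the exact structure against your Cachet version's config file.
return [
    'default' => 'mysql',
    'connections' => [
        'mysql' => [
            'driver'    => 'mysql',
            'host'      => env('DB_HOST', 'mariadb'),
            'database'  => env('DB_DATABASE', 'cachet'),
            'username'  => env('DB_USERNAME', 'cachet'),
            'password'  => env('DB_PASSWORD', 'cachet'),
            'charset'   => 'utf8',
            'collation' => 'utf8_unicode_ci',
            'prefix'    => '',
        ],
    ],
];
```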
<h3 id="deploythecachetcontainer">Deploy the Cachet Container</h3>
<p>You&apos;ve got a few options available when deploying the Cachet container.</p>
<p><strong>Option 1.</strong> Run Cachet (no SSL):</p>
<pre><code>docker run -d --name cachet --link mariadb:mariadb -p 80:8000 -e DB_HOST=mariadb -e DB_DATABASE=$DB_DATABASE -e DB_USERNAME=$DB_USERNAME -e DB_PASSWORD=$DB_PASSWORD cachethq/cachet:latest
</code></pre>
<blockquote>
<p>If you plan on running your Cachet deployment in production, you&apos;ll need to ensure that SSL is enabled. You can install your certificates either on a reverse proxy or load balancer in front of your Cachet container, or directly on the Cachet container itself via its Nginx configuration.</p>
</blockquote>
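If you go the reverse-proxy route, a minimal Nginx sketch of TLS termination in front of the container might look like the following (the domain, certificate paths and upstream address are all hypothetical; adjust them to your environment):

```nginx
server {
    listen 443 ssl;
    server_name status.example.com;  # hypothetical domain

    ssl_certificate     /etc/nginx/ssl/status.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/status.example.com.key;

    location / {
        # Forward to the Cachet container's HTTP port (80 on the Docker host)
        proxy_pass http://127.0.0.1:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```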
<p><strong>Option 2.</strong> Run Cachet (with SSL support/port binding):</p>
<pre><code>docker run -d --name cachet --link mariadb:mariadb -p 80:8000 -p 443:8003 -e DB_HOST=mariadb -e DB_DATABASE=$DB_DATABASE -e DB_USERNAME=$DB_USERNAME -e DB_PASSWORD=$DB_PASSWORD cachethq/cachet:latest
</code></pre>
<p><em>The above command will map/bind host port 443 to port 8003 on your Cachet container. Later we&apos;ll reconfigure Nginx on the container to serve HTTPS traffic on this port.</em></p>
<p><strong>Option 3.</strong> Run Cachet (with SSL support &amp; mail credential settings):</p>
<p>You can also pass SMTP mail credentials (if you have them, e.g. from Mailgun or Mandrill) at the same time you run your Cachet container:</p>
<pre><code>docker run -d --name cachet --link mariadb:mariadb -p 80:8000 -p 443:8003 -e DB_HOST=mariadb -e DB_DATABASE=$DB_DATABASE -e DB_USERNAME=$DB_USERNAME -e DB_PASSWORD=$DB_PASSWORD -e MAIL_HOST=smtp.mailgun.org -e MAIL_PORT=587 -e MAIL_USERNAME=you@yourdomain.com -e MAIL_PASSWORD=secret -e MAIL_ADDRESS=you@yourdoamin.com MAIL_NAME=&quot;Your Name&quot; cachethq/cachet:latest
</code></pre>
<h3 id="initializedatabaseandsetsecuritykey">Initialize Database and set Security Key</h3>
<p>Initialize the Database and set a Security Key if you haven&apos;t yet:</p>
<pre><code>$ docker exec -i cachet php artisan migrate --force
$ docker exec -i cachet php artisan key:generate
$ docker exec -i cachet php artisan config:cache
</code></pre>
<p>You can optionally install predis to enable usage of the various Redis drivers:</p>
<pre><code>$ docker exec -i cachet php composer.phar require predis/predis
</code></pre>
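With predis installed, you can point Cachet&apos;s cache and session drivers at a Redis instance via environment variables when starting the container. The flags below are a sketch following Laravel conventions — the variable names are illustrative, so confirm them against your Cachet version&apos;s <code>.env.example</code>:

```shell
# Hypothetical extra flags for the docker run commands above —
# Laravel-style Redis settings; verify the exact names for your version.
-e CACHE_DRIVER=redis \
-e SESSION_DRIVER=redis \
-e REDIS_HOST=redis \
-e REDIS_PORT=6379
```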
<p>Now go to <code>http://&lt;ipdockerisboundto&gt;/setup</code> and you&apos;ll be greeted with the setup page:</p>
<p><img src="https://res.cloudinary.com/qunux/image/upload/v1566652340/cachet_setup_1_opt_mswz4q.png" alt="Deploying Cachet on Carina (by Rackspace) Docker Environment" loading="lazy"><br>
From here, the remainder of the setup is quite straightforward.</p>
<h3 id="cachetsslsetup">Cachet SSL Setup</h3>
<p>If you chose to run your Cachet container using <strong>option 2 or 3</strong> above, you can now configure nginx for SSL.</p>
<p>Save the following configuration to a local file e.g. <code>default.conf</code><br>
<strong><mark>Don&apos;t forget to substitute in your own domain and certificate names where appropriate</mark></strong></p>
<pre><code>server {
    # This port is mapped to port 80 on the host
    listen 8000;

    # Your domain name
    server_name your_domain_name.com;

    # Redirects http requests to https
    return 301 https://$server_name$request_uri;
}
server {
    # This port is mapped to port 443 on the host
    listen 8003 default; ## Listen for ipv4; this line is default and implied

    ssl on;
    ssl_certificate /etc/nginx/ssl/your_domain_name.crt;
    ssl_certificate_key /etc/nginx/ssl/your_domain_name.key;


    # Make site accessible from http://localhost/
    server_name your_domain_name.com;
    root /var/www/html/public;

    index index.html index.htm index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
        fastcgi_keep_conn on;
    }

    location ~ /\.ht {
        deny all;
    }

}
</code></pre>
<p>Use the <code>docker cp</code> command to copy your SSL configuration file to the Cachet container&apos;s Nginx configuration directory:</p>
<pre><code>$ docker cp default.conf cachet:/etc/nginx/conf.d/default.conf
</code></pre>
<p>Create an SSL directory to store your SSL certificate &amp; key:</p>
<pre><code>$ docker exec -it cachet mkdir /etc/nginx/ssl
</code></pre>
<p>Have your SSL certificate and key file ready.</p>
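If you don&apos;t yet have a CA-issued certificate and only want to test the HTTPS flow end to end, you can generate a self-signed pair locally (assuming <code>openssl</code> is installed; browsers will warn about a self-signed certificate, so this is for testing only):

```shell
# Generate a self-signed certificate/key pair for testing only.
# Substitute your_domain_name.com with your actual domain.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=your_domain_name.com" \
  -keyout your_domain_name.key \
  -out your_domain_name.crt
```

The resulting `.crt` and `.key` files can then be copied into the container exactly as described below for a real certificate.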
<blockquote>
<p>Make sure your certificate file includes any required intermediate certificate(s). If not, concatenate it with your primary certificate into a single pem certificate file, using the following command:</p>
</blockquote>
<pre><code># cat primary_domain.crt intermediate.crt &gt;&gt; your_domain_name.crt
</code></pre>
<p>Copy your certificate (<code>your_domain_name.crt</code>) and key (<code>your_domain_name.key</code>) files to the <code>/etc/nginx/ssl</code> directory.</p>
<pre><code>docker cp your_domain_name.crt cachet:/etc/nginx/ssl/your_domain_name.crt
docker cp your_domain_name.key cachet:/etc/nginx/ssl/your_domain_name.key
</code></pre>
<p>Reload the nginx service:</p>
<pre><code>docker exec -it cachet service nginx reload
</code></pre>
<p>Now go to <code>http://your_domain_name.com</code> and you should be automatically redirected to <code>https://your_domain_name.com</code>.</p>
<p>That&apos;s it!</p>
<p>Check out the <a href="https://docs.cachethq.io/docs">documentation</a> for further information on using Cachet&apos;s features.</p>
<p><em>Spot any mistakes or have any suggestions? Please do let me know in the comment section.</em></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Monitoring pfSense logs using ELK (ElasticSearch 1.7, Logstash 1.5, Kibana 4.1) - PART 1]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://res.cloudinary.com/qunux/image/upload/v1566651964/pfsense_elk4_opt_ow2igg.png" alt loading="lazy"></p>
<p>This post is essentially an updated guide to my <a href="https://elijahpaul.co.uk/monitoring-pfsense-2-1-logs-using-elk-logstash-kibana-elasticsearch/">previous post</a> on monitoring pfSense logs using the <a href="https://www.elastic.co/">ELK</a> stack. Part 1 will cover the installation and configuration of ELK, and Part 2 will cover configuring Kibana 4 to visualize pfSense logs.</p>
<p><strong>So what&apos;s new?</strong></p>
<ul>
<li>Full guide to installing</li></ul>]]></description><link>https://elijahpaul.co.uk/updated-monitoring-pfsense-logs-using-elk-elasticsearch-logstash-kibana-part-1/</link><guid isPermaLink="false">5f68100167fa6f0001d1955a</guid><category><![CDATA[pfSense]]></category><category><![CDATA[Firewall]]></category><category><![CDATA[Logstash]]></category><category><![CDATA[Elasticsearch]]></category><category><![CDATA[Kibana]]></category><category><![CDATA[Logging]]></category><category><![CDATA[Log Analysis]]></category><category><![CDATA[ELK]]></category><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Sun, 11 Oct 2015 22:47:15 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://res.cloudinary.com/qunux/image/upload/v1566651964/pfsense_elk4_opt_ow2igg.png" alt loading="lazy"></p>
<p>This post is essentially an updated guide to my <a href="https://elijahpaul.co.uk/monitoring-pfsense-2-1-logs-using-elk-logstash-kibana-elasticsearch/">previous post</a> on monitoring pfSense logs using the <a href="https://www.elastic.co/">ELK</a> stack. Part 1 will cover the installation and configuration of ELK, and Part 2 will cover configuring Kibana 4 to visualize pfSense logs.</p>
<p><strong>So what&apos;s new?</strong></p>
<ul>
<li>Full guide to installing &amp; setting up ELK on Linux</li>
<li>Short tutorial on creating visualizations and dashboards using collected pfSense logs</li>
</ul>
<p>OK. So the goal is to use ELK to gather and visualize firewall logs from one (or more) pfSense servers.</p>
<p>This Logstash / Kibana setup has three main components:</p>
<ul>
<li><strong>Logstash:</strong> Processes the incoming logs sent from pfSense</li>
<li><strong>Elasticsearch:</strong> Stores all of the logs</li>
<li><strong>Kibana 4:</strong> Web interface for searching and visualizing logs (proxied through Nginx)</li>
</ul>
<p>(It is possible to manually install the <a href="https://github.com/elastic/logstash-forwarder">logstash-forwarder</a> on pfSense; however, this tutorial will only cover forwarding logs via the default settings in pfSense.)<br>
<img src="https://res.cloudinary.com/qunux/image/upload/v1566652064/elk-infrastructure_opt_rqdtci.svg" alt loading="lazy"><br>
For this tutorial all three components (ElasticSearch, Logstash &amp; Kibana + Nginx) will be installed on a single server.</p>
<h2 id="prerequisites">Prerequisites:</h2>
<h3 id="1forcentos7enabletheepelrepository">1. For CentOS 7, enable the <a href="https://fedoraproject.org/wiki/EPEL">EPEL</a> repository</h3>
<pre><code># rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
</code></pre>
<h3 id="2makesurewgetisinstalled">2. Make sure <code>wget</code> is installed</h3>
<h4 id="spanstylecolortealcentos7span"><span style="color:teal;">CentOS 7</span></h4>
<pre><code># sudo yum -y install wget
</code></pre>
<h4 id="spanstylecolormaroonubuntu14xxspan"><span style="color:maroon;">Ubuntu 14.xx</span></h4>
<pre><code>$ sudo apt-get -y install wget
</code></pre>
<h3 id="3installthelatestjdkonyourserver">3. Install the latest JDK on your server:</h3>
<h4 id="spanstylecolortealcentos7span"><span style="color:teal;">CentOS 7</span></h4>
<p>Download Java SDK:</p>
<pre><code># wget --no-check-certificate -c --header &quot;Cookie: oraclelicense=accept-securebackup-cookie&quot; http://download.oracle.com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-x64.tar.gz
</code></pre>
<pre><code># tar -xzf jdk-8u60-linux-x64.tar.gz
</code></pre>
<pre><code># mv jdk1.8.0_60/ /usr/
</code></pre>
<p>Install Java:</p>
<pre><code># /usr/sbin/alternatives --install /usr/bin/java java /usr/jdk1.8.0_60/bin/java 2

# /usr/sbin/alternatives --config java
</code></pre>
<pre><code>There is 1 program that provides &apos;java&apos;.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/jdk1.8.0_60/bin/java

Enter to keep the current selection[+], or type selection number:
</code></pre>
<p>Press ENTER</p>
<p>Verify Java version:</p>
<pre><code># java -version

java version &quot;1.8.0_60&quot;
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
</code></pre>
<p>Setup Environment Variables:</p>
<pre><code># export JAVA_HOME=/usr/jdk1.8.0_60/

# export JRE_HOME=/usr/jdk1.8.0_60/jre/
</code></pre>
<p>Set PATH variable:</p>
<pre><code># export PATH=$JAVA_HOME/bin:$PATH
</code></pre>
<p>To make these settings permanent, place the above three commands in <code>/etc/profile</code> (all users) or <code>~/.bash_profile</code> (single user).</p>
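Consolidated, the snippet to append looks like this (paths match the JDK version installed above; adjust if yours differs):

```shell
# Append to /etc/profile (all users) or ~/.bash_profile (single user)
export JAVA_HOME=/usr/jdk1.8.0_60/
export JRE_HOME=/usr/jdk1.8.0_60/jre/
export PATH=$JAVA_HOME/bin:$PATH
```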
<h4 id="spanstylecolormaroonubuntu14xxspan"><span style="color:maroon;">Ubuntu 14.xx</span></h4>
<p>Remove the OpenJDK from the system, if you have it already installed.</p>
<pre><code>$ sudo apt-get remove --purge openjdk*
</code></pre>
<p>Add repository.</p>
<pre><code>$ sudo add-apt-repository -y ppa:webupd8team/java
</code></pre>
<p>Run the <code>apt-get update</code> command to pull the packages information from the newly added repository.</p>
<pre><code>$ sudo apt-get update
</code></pre>
<p>Issue the following command to install Java jdk 1.8.</p>
<pre><code>$ sudo apt-get -y install oracle-java8-installer
</code></pre>
<p>While installing, you will be required to accept the Oracle binary licenses.</p>
<p>Verify java version</p>
<pre><code>$ java -version
 
java version &quot;1.8.0_60&quot;
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
</code></pre>
<p>Configure java Environment</p>
<pre><code>$ sudo apt-get install oracle-java8-set-default
</code></pre>
<h2 id="installelasticsearch">Install ElasticSearch</h2>
<p>Download and install the public GPG signing key:</p>
<h4 id="spanstylecolortealcentos7span"><span style="color:teal;">CentOS 7</span></h4>
<pre><code># rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
</code></pre>
<h4 id="spanstylecolormaroonubuntu14xxspan"><span style="color:maroon;">Ubuntu 14.xx</span></h4>
<pre><code>$ wget -qO - https://packages.elasticsearch.co/GPG-KEY-elasticsearch | sudo apt-key add -
</code></pre>
<p>Add and enable ElasticSearch repo:</p>
<h4 id="spanstylecolortealcentos7span"><span style="color:teal;">CentOS 7</span></h4>
<pre><code># cat &lt;&lt;EOF &gt;&gt; /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-1.7]
name=Elasticsearch repository for 1.7.x packages
baseurl=http://packages.elastic.co/elasticsearch/1.7/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
</code></pre>
<h4 id="spanstylecolormaroonubuntu14xxspan"><span style="color:maroon;">Ubuntu 14.xx</span></h4>
<pre><code>$ echo &quot;deb http://packages.elastic.co/elasticsearch/1.7/debian stable main&quot; | sudo tee -a /etc/apt/sources.list.d/elasticsearch-1.7.list
</code></pre>
<p>Install ElasticSearch</p>
<h4 id="spanstylecolortealcentos7span"><span style="color:teal;">CentOS 7</span></h4>
<pre><code># yum -y install elasticsearch
</code></pre>
<h4 id="spanstylecolormaroonubuntu14xxspan"><span style="color:maroon;">Ubuntu 14.xx</span></h4>
<pre><code>$ sudo apt-get update &amp;&amp; sudo apt-get install elasticsearch
</code></pre>
<p>Configure Elasticsearch to auto-start during system startup:</p>
<h4 id="spanstylecolortealcentos7span"><span style="color:teal;">CentOS 7</span></h4>
<pre><code># /bin/systemctl daemon-reload
# /bin/systemctl enable elasticsearch.service
# /bin/systemctl start elasticsearch.service
</code></pre>
<h4 id="spanstylecolormaroonubuntu14xxspan"><span style="color:maroon;">Ubuntu 14.xx</span></h4>
<pre><code>$ sudo update-rc.d elasticsearch defaults 95 10
</code></pre>
<p>Now wait at least a minute for Elasticsearch to fully start, otherwise the test will fail. Elasticsearch should now be listening on port 9200 for HTTP requests, which we can check with curl:</p>
<pre><code># curl -X GET http://localhost:9200
{
  &quot;status&quot; : 200,
  &quot;name&quot; : &quot;Alex&quot;,
  &quot;cluster_name&quot; : &quot;elasticsearch&quot;,
  &quot;version&quot; : {
    &quot;number&quot; : &quot;1.7.1&quot;,
    &quot;build_hash&quot; : &quot;b88f43fc40b0bcd7f173a1f9ee2e97816de80b19&quot;,
    &quot;build_timestamp&quot; : &quot;2015-07-29T09:54:16Z&quot;,
    &quot;build_snapshot&quot; : false,
    &quot;lucene_version&quot; : &quot;4.10.4&quot;
  },
  &quot;tagline&quot; : &quot;You Know, for Search&quot;
}
</code></pre>
<h2 id="installlogstash">Install Logstash</h2>
<h3 id="addthelogstashrepoenableinstall">Add the Logstash repo, enable &amp; install</h3>
<h4 id="spanstylecolortealcentos7span"><span style="color:teal;">CentOS 7</span></h4>
<pre><code># cat &lt;&lt;EOF &gt;&gt; /etc/yum.repos.d/logstash.repo
[logstash-1.5]
name=Logstash repository for 1.5.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.5/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1  
EOF 
</code></pre>
<pre><code># yum install logstash -y
</code></pre>
<h4 id="spanstylecolormaroonubuntu14xxspan"><span style="color:maroon;">Ubuntu 14.xx</span></h4>
<pre><code>$ echo &quot;deb http://packages.elasticsearch.org/logstash/1.5/debian stable main&quot; | sudo tee -a /etc/apt/sources.list
</code></pre>
<pre><code>$ sudo apt-get update &amp;&amp; sudo apt-get install logstash
</code></pre>
<h2 id="createsslcertificateoptional">Create SSL Certificate (Optional)*</h2>
<h5 id="youcanskipthisstepifyoudontintendtouseyourelkinstalltomonitorlogsforanythingotherthanpfsenseoranytcpudpforwardedlogs">*<em>You can skip this step if you don&apos;t intend to use your ELK install to monitor logs for anything other than pfSense (or any TCP/UDP forwarded logs).</em></h5>
<h5 id="pfsenseforwardsitslogsviaudpthereforetheresnorequirementtosetupalogstashforwarderenvironmenthavingsaidthatitsstillgoodpracticetosetitupsinceyoullmostlikelybeusingyourelkstackformorethancollectingandparsingonlypfsenselogs"><em>(pfSense forwards its logs via UDP, therefore there&apos;s no requirement to set up a Logstash-Forwarder environment. Having said that, it&apos;s still good practice to set it up, since you&apos;ll most likely be using your ELK stack for more than collecting and parsing only pfSense logs.)</em></h5>
<p>Logstash-Forwarder (formerly LumberJack) utilizes an SSL certificate and key pair to verify the identity of your Logstash server.</p>
<p>You have two options when generating this SSL certificate.</p>
<p><strong>1.</strong> Hostname/FQDN (DNS) Setup<br><br>
<strong>2.</strong> IP Address Setup</p>
<p><strong>Option 1</strong><br><br>
If you have DNS setup within your private/internal network, add a DNS A record pointing to the private IP address of your ELK/Logstash server. Alternatively add a DNS A record with your DNS provider pointing to your ELK/Logstash servers public IP address. As long as each server you&apos;re gathering logs from can resolve the Logstash servers hostname/domain name, either is fine.</p>
<p>Alternatively, you can edit the <code>/etc/hosts</code> file of the servers you&apos;re collecting logs from by adding an IP address (public or private) and hostname entry pointing to your Logstash server (private IP 192.168.0.77 in my case).</p>
<pre><code># nano /etc/hosts

192.168.0.77 elk.mydomain.com elk
</code></pre>
<p>Now to generate the SSL certificate and key pair, go to the OpenSSL directory.</p>
<h4 id="spanstylecolortealcentos7span"><span style="color:teal;">CentOS 7</span></h4>
<pre><code># cd /etc/pki/tls
</code></pre>
<h4 id="spanstylecolormaroonubuntu14xxspan"><span style="color:maroon;">Ubuntu 14.xx</span></h4>
<p>Use the following commands to create the directories that will store your certificate and private key.</p>
<pre><code>$ sudo mkdir -p /etc/pki/tls/certs
$ sudo mkdir /etc/pki/tls/private
</code></pre>
<p>Execute the following command to create an SSL certificate, replacing &#x201C;elk&#x201D; with the hostname of your real Logstash server.</p>
<pre><code># cd /etc/pki/tls
# openssl req -x509 -nodes -newkey rsa:2048 -days 3650 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt -subj /CN=elk
</code></pre>
<p>The generated logstash-forwarder.crt should be copied to all client servers whose logs you intend to send to your Logstash server.</p>
<h4 id="nbifyouvedecidedtogowiththeabovedescribedoption1pleaseignoreoption2belowandskipstraighttotheconfigurelogstashsection">N.B. If you&apos;ve decided to go with the above described <strong>Option 1</strong>, please ignore Option 2 below, and skip straight to the &apos;Configure Logstash&apos; section.</h4>
<p><strong>Option 2</strong><br><br>
If for some reason you don&apos;t have DNS setup and/or can&apos;t resolve the hostname of your Logstash server, you can add the IP address of your Logstash server to the subjectAltName (SAN) of the certificate we&apos;re about to generate.</p>
<p>Start by editing the OpenSSL configuration file:</p>
<pre><code>$ nano /etc/pki/tls/openssl.cnf
</code></pre>
<p>Find the section starting with <code>[ v3_ca ]</code> and add the following line, substituting the IP address for that of your own Logstash server.</p>
<pre><code>subjectAltName = IP: 192.168.0.77
</code></pre>
<p>Save &amp; exit</p>
<p>Execute the following command to create the SSL certificate and private key.</p>
<pre><code># cd /etc/pki/tls
# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
</code></pre>
<p>The generated logstash-forwarder.crt should be copied to all client servers you intend to collect logs from.</p>
<h2 id="configurelogstash">Configure Logstash</h2>
<p>Logstash configuration files live in the <code>/etc/logstash/conf.d/</code> directory and use Logstash&apos;s own JSON-like configuration format. A Logstash server configuration consists of three sections: <strong>input</strong>, <strong>filter</strong> and <strong>output</strong>, all of which can be placed in a single configuration file. However, in practice it&apos;s much more manageable to place these sections in separate config files.</p>
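To illustrate the overall three-section shape before building the real files, here is a minimal (non-functional) skeleton — for orientation only, not meant to be used as-is:

```conf
# Skeleton of a Logstash pipeline configuration
input {
  udp {
    port => 5140
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    # parsing and enrichment go here
  }
}

output {
  elasticsearch { }
}
```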
<p>Create an input configuration:</p>
<pre><code># nano /etc/logstash/conf.d/01-inputs.conf
</code></pre>
<p>Paste the following:</p>
<pre><code>#logstash-forwarder [Not utilized by pfSense by default]
#input {
#  lumberjack {
#    port =&gt; 5000
#    type =&gt; &quot;logs&quot;
#    ssl_certificate =&gt; &quot;/etc/pki/tls/certs/logstash-forwarder.crt&quot;
#    ssl_key =&gt; &quot;/etc/pki/tls/private/logstash-forwarder.key&quot;
#  }
#}

#tcp syslog stream via 5140
input {
  tcp {
    type =&gt; &quot;syslog&quot;
    port =&gt; 5140
  }
}
#udp syslog stream via 5140
input {
  udp {
    type =&gt; &quot;syslog&quot;
    port =&gt; 5140
  }
}
</code></pre>
<p>Create a syslog configuration:</p>
<pre><code># nano /etc/logstash/conf.d/10-syslog.conf
</code></pre>
<p>Paste the following:</p>
<pre><code>filter {
  if [type] == &quot;syslog&quot; {

    # change to your pfSense IP address
    if [host] =~ /192\.168\.0\.2/ {
      mutate {
        add_tag =&gt; [&quot;PFSense&quot;, &quot;Ready&quot;]
      }
    }

    if &quot;Ready&quot; not in [tags] {
      mutate {
        add_tag =&gt; [ &quot;syslog&quot; ]
      }
    }
  }
}

filter {
  if [type] == &quot;syslog&quot; {
    mutate {
      remove_tag =&gt; &quot;Ready&quot;
    }
  }
}

filter {
  if &quot;syslog&quot; in [tags] {
    grok {
      match =&gt; { &quot;message&quot; =&gt; &quot;%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}&quot; }
      add_field =&gt; [ &quot;received_at&quot;, &quot;%{@timestamp}&quot; ]
      add_field =&gt; [ &quot;received_from&quot;, &quot;%{host}&quot; ]
    }
    syslog_pri { }
    date {
      match =&gt; [ &quot;syslog_timestamp&quot;, &quot;MMM  d HH:mm:ss&quot;, &quot;MMM  dd HH:mm:ss&quot; ]
      locale =&gt; &quot;en&quot;
    }

    if !(&quot;_grokparsefailure&quot; in [tags]) {
      mutate {
        replace =&gt; [ &quot;@source_host&quot;, &quot;%{syslog_hostname}&quot; ]
        replace =&gt; [ &quot;@message&quot;, &quot;%{syslog_message}&quot; ]
      }
    }

    mutate {
      remove_field =&gt; [ &quot;syslog_hostname&quot;, &quot;syslog_message&quot;, &quot;syslog_timestamp&quot; ]
    }
#    if &quot;_grokparsefailure&quot; in [tags] {
#      drop { }
#    }
  }
}
</code></pre>
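<p>The grok line above is straightforward to sanity-check outside Logstash. Here is a rough Python approximation of the <code>SYSLOGTIMESTAMP</code>/<code>SYSLOGHOST</code>/program/PID pattern (a sketch with simplified sub-patterns, not grok&apos;s exact semantics), run against a made-up syslog line:</p>

```python
import re

# Rough approximation of the grok pattern in 10-syslog.conf
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[\w./-]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

line = "Oct 12 09:05:19 fw01 sshd[2211]: Accepted publickey for admin"
fields = SYSLOG_RE.match(line).groupdict()
# fields["syslog_program"] -> "sshd", fields["syslog_pid"] -> "2211"
```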
<p>Create an outputs configuration:</p>
<pre><code># nano /etc/logstash/conf.d/30-outputs.conf
</code></pre>
<p>Paste the following:</p>
<pre><code>output {
  elasticsearch { hosts =&gt; [&quot;localhost&quot;] index =&gt; &quot;logstash-%{+YYYY.MM.dd}&quot; }
  stdout { codec =&gt; rubydebug }
}
</code></pre>
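<p>The <code>%{+YYYY.MM.dd}</code> sprintf reference in the index name gives you one Elasticsearch index per day, keyed off each event&apos;s <code>@timestamp</code>. The equivalent formatting in Python:</p>

```python
from datetime import datetime, timezone

def daily_index(ts):
    """Name the daily index the way Logstash's %{+YYYY.MM.dd} does.

    Logstash evaluates the sprintf against @timestamp, which is kept in UTC.
    """
    return ts.astimezone(timezone.utc).strftime("logstash-%Y.%m.%d")

print(daily_index(datetime(2015, 10, 12, 9, 5, tzinfo=timezone.utc)))
# logstash-2015.10.12
```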
<p>Create your pfSense configuration:</p>
<pre><code># nano /etc/logstash/conf.d/11-pfsense.conf
</code></pre>
<p>Paste the following:</p>
<pre><code>filter {
  if &quot;PFSense&quot; in [tags] {
    grok {
      add_tag =&gt; [ &quot;firewall&quot; ]
      match =&gt; [ &quot;message&quot;, &quot;&lt;(?&lt;evtid&gt;.*)&gt;(?&lt;datetime&gt;(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\s+(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9]) (?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:[0-5][0-9])) (?&lt;prog&gt;.*?): (?&lt;msg&gt;.*)&quot; ]
    }
    mutate {
      gsub =&gt; [&quot;datetime&quot;,&quot;  &quot;,&quot; &quot;]
    }
    date {
      match =&gt; [ &quot;datetime&quot;, &quot;MMM dd HH:mm:ss&quot; ]
      timezone =&gt; &quot;UTC&quot;
    }
    mutate {
      replace =&gt; [ &quot;message&quot;, &quot;%{msg}&quot; ]
    }
    mutate {
      remove_field =&gt; [ &quot;msg&quot;, &quot;datetime&quot; ]
    }
}
if [prog] =~ /^filterlog$/ {
    mutate {
      remove_field =&gt; [ &quot;msg&quot;, &quot;datetime&quot; ]
    }
    grok {
      patterns_dir =&gt; &quot;/etc/logstash/conf.d/patterns&quot;
      match =&gt; [ &quot;message&quot;, &quot;%{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}&quot;,
		 &quot;message&quot;, &quot;%{PFSENSE_LOG_DATA}%{PFSENSE_IPv4_SPECIFIC_DATA_ECN}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}&quot; ]
    }
    mutate {
      lowercase =&gt; [ &apos;proto&apos; ]
    }
    geoip {
      add_tag =&gt; [ &quot;GeoIP&quot; ]
      source =&gt; &quot;src_ip&quot;
      # Optional GeoIP database
      database =&gt; &quot;/etc/logstash/GeoLiteCity.dat&quot;
    }
  }
}
</code></pre>
<p>The above configuration uses a <a href="https://gist.github.com/elijahpaul/3d80030ac3e8138848b5">pattern file</a>. Create a patterns directory:</p>
<pre><code># mkdir /etc/logstash/conf.d/patterns
</code></pre>
<p>And download the following pattern file to it:</p>
<pre><code># cd /etc/logstash/conf.d/patterns
# wget https://gist.githubusercontent.com/elijahpaul/3d80030ac3e8138848b5/raw/abba6aa8398ba601389457284f7c34bbdbbef4c7/pfsense2-2.grok
</code></pre>
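<p>One subtlety worth noting in the filter above: syslog pads single-digit days with an extra space (<code>Oct  2</code>), which is why the <code>gsub</code> runs before the <code>date</code> filter&apos;s <code>MMM dd HH:mm:ss</code> pattern. The same normalization, sketched in Python:</p>

```python
import re
from datetime import datetime

def parse_pfsense_datetime(raw, year=2015):
    """Mirror the mutate/gsub step, then parse like the date filter."""
    # Collapse the double space syslog uses for single-digit days
    cleaned = re.sub(r"\s{2,}", " ", raw)
    # Syslog timestamps carry no year, so one must be supplied explicitly
    return datetime.strptime(f"{year} {cleaned}", "%Y %b %d %H:%M:%S")

dt = parse_pfsense_datetime("Oct  2 09:05:19")
```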
<p><strong>(Optional)</strong> Download and install the MaxMind GeoIP database:</p>
<pre><code>$ cd /etc/logstash
$ sudo curl -O &quot;http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz&quot;
$ sudo gunzip GeoLiteCity.dat.gz
</code></pre>
<p>Now restart the logstash service.</p>
<h4 id="spanstylecolortealcentos7span"><span style="color:teal;">CentOS 7</span></h4>
<pre><code># systemctl restart logstash.service
</code></pre>
<h4 id="spanstylecolormaroonubuntu14xxspan"><span style="color:maroon;">Ubuntu 14.xx</span></h4>
<pre><code>$ sudo service logstash restart
</code></pre>
<p>Logstash&apos;s own server logs are stored in the following file:</p>
<pre><code># cat /var/log/logstash/logstash.log
</code></pre>
<p>Once setup is complete, logs received from pfSense can be viewed live via:</p>
<pre><code># tail -f /var/log/logstash/logstash.stdout
</code></pre>
<p>Both of these will help you troubleshoot any issues you encounter.</p>
<h2 id="configuringpfsenseforremoteloggingtoelk">Configuring pfSense for remote logging to ELK</h2>
<p>Login to pfSense and check the dashboard to ensure you&apos;re running pfSense 2.2.x</p>
<p><img src="https://res.cloudinary.com/qunux/image/upload/v1566652065/pfsense_sys_info2_opt_khhy6t.png" alt loading="lazy"></p>
<p>Now go to the <strong>settings</strong> tab via <strong>Status</strong> &gt; <strong>System Logs</strong>. Check &apos;<strong>Send log messages to remote syslog server</strong>&apos;, enter your ELK server&apos;s IP address and custom port (port 5140 in this case), and check &apos;<strong>Firewall events</strong>&apos; (or &apos;<strong>Everything</strong>&apos; if you wish to send all pfSense logs to ELK).</p>
<p><img src="https://res.cloudinary.com/qunux/image/upload/v1566652065/pfSense_remote_logging_options_opt_miokdi.png" alt loading="lazy"></p>
<p>That&apos;s it for pfSense!</p>
<h2 id="configurekibana4">Configure Kibana4</h2>
<p>Kibana 4 provides visualization of your pfSense logs. Use the following commands in a terminal to download and extract it:</p>
<pre><code>wget https://download.elastic.co/kibana/kibana/kibana-4.1.2-linux-x64.tar.gz
</code></pre>
<pre><code># tar -zxf kibana-4.1.2-linux-x64.tar.gz

# mv kibana-4.1.2-linux-x64 /opt/kibana4
</code></pre>
<p>Enable the PID file for Kibana; this is required by the systemd unit file created below.</p>
<pre><code># sed -i &apos;s/#pid_file/pid_file/g&apos; /opt/kibana4/config/kibana.yml
</code></pre>
<p>Kibana can be started directly by running <code>/opt/kibana4/bin/kibana</code>; to run Kibana as a service we will create a systemd unit file.</p>
<pre><code># nano /etc/systemd/system/kibana4.service

[Unit]
Description=Kibana 4 Web Interface
After=elasticsearch.service
After=logstash.service
[Service]
ExecStartPre=-/bin/rm -f /var/run/kibana.pid
ExecStart=/opt/kibana4/bin/kibana
ExecStop=/bin/kill -9 $MAINPID
[Install]
WantedBy=multi-user.target
</code></pre>
<p>Start Kibana, and enable it to start automatically at boot:</p>
<pre><code># systemctl start kibana4.service

# systemctl enable kibana4.service
</code></pre>
<p>Check to see if Kibana is working properly by going to <a href="http://your-ELK-IP:5601/">http://your-ELK-IP:5601/</a> in your browser.</p>
<p>You should see the following page, where you must configure the Logstash index pattern before using Kibana. From the <code>Time-field name</code> dropdown menu select <code>@timestamp</code>.</p>
<p><img src="https://res.cloudinary.com/qunux/image/upload/v1566652066/install-ELK-Kibana_wgqlpi.png" alt loading="lazy"></p>
<p><em>Spot any mistakes/errors? Or have any suggestions? Please make a comment below.</em></p>
<h3 id="part2configuringkibana4visualizationsanddashboardscomingsoon">Part 2: Configuring Kibana 4 visualizations and dashboards. (Coming soon)...</h3>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Analysing Exchange (2013) Message Tracking Logs using NXLog & ELK (ElasticSearch, Logstash, Kibana)]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://res.cloudinary.com/qunux/image/upload/v1566652511/Exchange-Message-Tracking-Logs_opt_nr4zm1.png" alt loading="lazy"></p>
<h1 id="introduction">Introduction</h1>
<p>Exchange 2013 maintains a detailed record of messages sent between the transport services within an Exchange organization via message tracking logs.</p>
<p>The default location for these logs is; <code>C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking</code>.</p>
<p>Exchange generates 3 main log files <em>(there is a 4th, but</em></p>]]></description><link>https://elijahpaul.co.uk/analysing-exchange-2013-message-tracking-logs-using-elk-elasticsearch-logstash-kibana/</link><guid isPermaLink="false">5f68100167fa6f0001d19559</guid><category><![CDATA[Logstash]]></category><category><![CDATA[Elasticsearch]]></category><category><![CDATA[Kibana]]></category><category><![CDATA[Logging]]></category><category><![CDATA[Exchange Server]]></category><category><![CDATA[Forensics]]></category><category><![CDATA[nxlog]]></category><category><![CDATA[Log Analysis]]></category><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Thu, 30 Oct 2014 04:48:02 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://res.cloudinary.com/qunux/image/upload/v1566652511/Exchange-Message-Tracking-Logs_opt_nr4zm1.png" alt loading="lazy"></p>
<h1 id="introduction">Introduction</h1>
<p>Exchange 2013 maintains a detailed record of messages sent between the transport services within an Exchange organization via message tracking logs.</p>
<p>The default location for these logs is; <code>C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking</code>.</p>
<p>Exchange generates 3 main log files <em>(there is a 4th, but its use is negligible)</em>:</p>
<ul>
<li><strong>MSGTRKMS</strong>yyyymmddhh-nnnn.log; traffic events (sent messages)</li>
<li><strong>MSGTRKMD</strong>yyyymmddhh-nnnn.log; traffic events (received messages)</li>
<li><strong>MSGTRK</strong>yyyymmddhh-nnnn.log; Transport service events (message flow)</li>
</ul>
<p>These files are in CSV (comma-separated values) format, making them easy for Logstash to parse. Your ELK server can be used to analyse message activity trends, such as which users are sending the most email, to whom they&apos;re sending it, and with what frequency. You can also determine the total volume of messages received and sent over a period, the average message size, the total message volume by user over a period, and much more.</p>
<p><em>More detailed information regarding Exchange Server 2013 message tracking logs can be found <a href="http://technet.microsoft.com/en-us/library/bb124375(v=exchg.150).aspx">here</a></em></p>
<h1 id="setup">Setup</h1>
<h3 id="enablemessagetrackinginexchange2013">Enable Message Tracking in Exchange 2013</h3>
<p>Message tracking in Exchange 2013 should be enabled by default. If it&apos;s not, you can use either the <strong>Exchange Admin Centre (EAC)</strong> or the <strong>Exchange Management Shell (EMS)</strong> to enable/configure it.</p>
<p><strong>EAC</strong></p>
<p>In the EAC, navigate to <strong>Servers</strong> &gt; <strong>Servers</strong>.</p>
<p>Select the Mailbox server you want to configure, and then click <strong>Edit</strong>.</p>
<p>On the server properties page, click <strong>Transport Logs</strong>.</p>
<p><img src="https://res.cloudinary.com/qunux/image/upload/v1566652548/enable_msg_tracking_EXH2013_rkomws.png" alt loading="lazy"></p>
<p>Make sure the <strong>Enable message tracking log</strong> checkbox is checked.</p>
<p>Click <strong>Save</strong>.</p>
<p><strong>EMS</strong></p>
<p>Start the EMS and run the following command:</p>
<pre><code>Set-TransportService &lt;ServerIdentity&gt; -MessageTrackingLogEnabled &lt;$true | $false&gt; -MessageTrackingLogMaxAge &lt;dd.hh:mm:ss&gt; -MessageTrackingLogMaxDirectorySize &lt;Size&gt; -MessageTrackingLogMaxFileSize &lt;Size&gt; -MessageTrackingLogPath &lt;LocalFilePath&gt; -MessageTrackingLogSubjectLoggingEnabled &lt;$true|$false&gt;
</code></pre>
<p>e.g.</p>
<pre><code>Set-TransportService CASMBX01 -MessageTrackingLogPath &quot;C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking&quot; -MessageTrackingLogMaxFileSize 20MB -MessageTrackingLogMaxDirectorySize 1.5GB -MessageTrackingLogMaxAge 45.00:00:00
</code></pre>
<p>This example:</p>
<ul>
<li>Sets the location of the message tracking log files to <code>C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking</code>. Note that if the folder doesn&apos;t exist, it will be created for you.</li>
<li>Sets the maximum size of a message tracking log file to 20 MB.</li>
<li>Sets the maximum size of the message tracking log directory to 1.5 GB.</li>
<li>Sets the maximum age of a message tracking log file to 45 days.</li>
</ul>
<h3 id="configurelogstashtoparseexchange2013messagetrackinglogs">Configure Logstash to parse Exchange 2013 message tracking logs</h3>
<p>On your ELK server, add the following input &amp; filter to your <code>logstash.conf</code> file in the <code>/etc/logstash/conf.d/</code> configuration directory, or in separate config files (depending on your setup) e.g. <code>01-inputs.conf</code> &amp; <code>12-exchange_msg_trk.conf</code>.</p>
<p><em><strong>input</strong></em></p>
<pre><code>#udp syslog stream via 5141
input {
  udp {
    type =&gt; &quot;Exchange&quot;
    port =&gt; 5141
  }
}
</code></pre>
<p><em><strong>filter</strong></em></p>
<pre><code>filter {
  if [type] == &quot;Exchange&quot; {
	csv {
            add_tag =&gt; [ &apos;exh_msg_trk&apos; ]
            columns =&gt; [&apos;logdate&apos;, &apos;client_ip&apos;, &apos;client_hostname&apos;,  &apos;server_ip&apos;, &apos;server_hostname&apos;, &apos;source_context&apos;, &apos;connector_id&apos;, &apos;source&apos;, &apos;event_id&apos;, &apos;internal_message_id&apos;, &apos;message_id&apos;, &apos;network_message_id&apos;, &apos;recipient_address&apos;, &apos;recipient_status&apos;, &apos;total_bytes&apos;, &apos;recipient_count&apos;, &apos;related_recipient_address&apos;, &apos;reference&apos;, &apos;message_subject&apos;, &apos;sender_address&apos;, &apos;return_path&apos;, &apos;message_info&apos;, &apos;directionality&apos;, &apos;tenant_id&apos;, &apos;original_client_ip&apos;, &apos;original_server_ip&apos;, &apos;custom_data&apos;]
	    remove_field =&gt; [ &quot;logdate&quot; ]
	    }
	grok {      
        match =&gt; [ &quot;message&quot;, &quot;%{TIMESTAMP_ISO8601:timestamp}&quot; ]
	    }
	mutate {
    	convert =&gt; [ &quot;total_bytes&quot;, &quot;integer&quot; ]
	    convert =&gt; [ &quot;recipient_count&quot;, &quot;integer&quot; ]
	    split =&gt; [&quot;recipient_address&quot;, &quot;;&quot;]
	    split =&gt; [ &quot;source_context&quot;, &quot;;&quot; ]
	    split =&gt; [ &quot;custom_data&quot;, &quot;;&quot; ]
  	    }
	date {
        match =&gt; [ &quot;timestamp&quot;, &quot;ISO8601&quot; ]
        timezone =&gt; &quot;Europe/London&quot;
	    remove_field =&gt; [ &quot;timestamp&quot; ]
	    }
	if &quot;_grokparsefailure&quot; in [tags] {
	      drop { }
	    }
	}
}
</code></pre>
<p><em><strong>output</strong></em></p>
<pre><code>output {
  elasticsearch { host =&gt; &quot;localhost&quot; }
  stdout { codec =&gt; rubydebug }
}
</code></pre>
<p>Run the following command to test the validity of your configuration:</p>
<pre><code># /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/logstash.conf
</code></pre>
<p>Once you get a &apos;<code>Configuration OK</code>&apos; message, restart the Logstash service for the configuration to take effect.</p>
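<p>To see what the csv/mutate steps above actually do to a tracking-log line, here is a Python sketch of the same transformation. It uses a reduced, illustrative column list and a fabricated sample line; the real logs carry the full set of columns named in the filter.</p>

```python
import csv
import io

# Reduced, illustrative column list (the real filter names 27 columns)
COLUMNS = ["logdate", "client_ip", "event_id",
           "recipient_address", "total_bytes", "recipient_count"]

def parse_tracking_line(line):
    row = next(csv.reader(io.StringIO(line)))
    event = dict(zip(COLUMNS, row))
    # mutate { convert => [...] } equivalents
    event["total_bytes"] = int(event["total_bytes"])
    event["recipient_count"] = int(event["recipient_count"])
    # mutate { split => ["recipient_address", ";"] }
    event["recipient_address"] = event["recipient_address"].split(";")
    # remove_field => ["logdate"]
    del event["logdate"]
    return event

sample = ('2014-10-30T04:48:02.000Z,10.0.0.5,RECEIVE,'
          '"a@example.com;b@example.com",2048,2')
event = parse_tracking_line(sample)
```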
<h3 id="installconfigurenxlogonyourexchangeserver">Install &amp; Configure NXLog on your Exchange Server</h3>
<p><a href="http://nxlog.org/products/nxlog-community-edition">NXLog</a> is a popular open source log management tool for collecting and forwarding logs from Windows (as well as GNU/Linux) platforms. It also happens to be dead simple to install and configure.</p>
<ol>
<li>Logon to your Exchange server as Administrator. Next, download and run the latest version of the <a href="http://sourceforge.net/projects/nxlog-ce/files/" target="_blank">NXLog installer</a>. Follow the on-screen prompts.</li><br>
<li>Open the configuration file <code>C:\Program Files (x86)\nxlog\conf\nxlog.conf</code>, (or on 64bit installs <code>C:\Program Files\nxlog\conf\nxlog.conf</code>).</li><br>
<li>Edit your <code>nxlog.conf</code> to look like the following:
<pre><code>## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally and is also available
## online at http://nxlog.org/nxlog-docs/en/nxlog-reference-manual.html

## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.

#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log

&lt;Extension syslog&gt;
    Module      xm_syslog
&lt;/Extension&gt;

define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking

&lt;Input in_exchange&gt;
   Module     im_file
   File       &apos;%BASEDIR%\MSGTRK????????*-*.LOG&apos; # Exports all logs in Directory
   SavePos    TRUE
   Exec       if $raw_event =~ /HealthMailbox/ drop();
   Exec       if $raw_event =~ /^#/ drop();
&lt;/Input&gt;

&lt;Output out_exchange&gt;
    Module    om_udp
    Host      192.168.0.2 # Replace with your Logstash hostname/IP
    Port      5141        # Replace with your desired port
    Exec      $SyslogFacilityValue = 2;
    Exec      $SourceName = &apos;exchange_msgtrk_log&apos;;
    Exec      to_syslog_bsd();
&lt;/Output&gt;

&lt;Route exchange&gt;
    Path      in_exchange =&gt; out_exchange
&lt;/Route&gt;
</code></pre>
<p><strong>N.B.</strong> Replace the <code>Host</code> IP and <code>Port</code> value with those you configured on your Logstash server earlier.<br><br></p>
<p>Exchange uses the health mailboxes to establish that email connectivity exists to the various databases in the system by sending artificial messages to and from the mailboxes every five minutes or so. For my use case, I had no need to export the message tracking logs associated with these health messages. The line <code>Exec       if $raw_event =~ /HealthMailbox/ drop();</code> drops these log entries from those being exported to Logstash. You can of course comment out or remove this line if you require these entries for your own analysis.</p>
</li><br>
<li>In Powershell or a Command Prompt run <code>net start nxlog</code> to start NXLog.</li>
</ol>
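<p>The two <code>drop()</code> rules in the <code>in_exchange</code> block boil down to a simple per-line predicate: discard CSV comment headers and HealthMailbox probe traffic, forward everything else. A Python sketch of that logic (sample lines are fabricated):</p>

```python
def should_forward(raw_event):
    """Mirror the Exec drop() rules from the in_exchange block."""
    if raw_event.startswith("#"):       # CSV header/comment lines
        return False
    if "HealthMailbox" in raw_event:    # synthetic health-probe messages
        return False
    return True

lines = [
    "#Fields: date-time,client-ip,...",
    "2014-10-30T04:48:02.000Z,10.0.0.5,...,HealthMailbox8f9a@example.com,...",
    "2014-10-30T04:48:03.000Z,10.0.0.5,...,user@example.com,...",
]
kept = [l for l in lines if should_forward(l)]
```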
<h3 id="kibana">Kibana</h3>
<p>Once your logs are successfully flowing to your logstash server, you can use queries and filters in Kibana to create panels like these:</p>
<p><img src="https://res.cloudinary.com/qunux/image/upload/v1566652626/msg_vol_bgwfbn.png" alt loading="lazy"><br>
<i style="font-size:13px">Message Volumes</i><br><br>
<img src="https://res.cloudinary.com/qunux/image/upload/v1566652625/top_senders_msg_breakdown_tjlppt.png" alt loading="lazy"><br>
<i style="font-size:13px">Top Senders &amp; Message Percentage Breakdown</i></p>
<p>Link to Exchange message tracking Dashboard; Gist: <a href="https://gist.github.com/elijahpaul/4b9cd98715c0ba2a75de/raw/9324f7c637e486fb53c8e66e1552595b73b5e636/exchange_msg_trak_dash_v1.json">4b9cd98715c0ba2a75de</a></p>
<p>If using my dashboard;</p>
<p><strong>1.</strong> In the Gist, replace my (<em>Outbound</em>) Send Connector name (&apos;<code>Outbound Internet Mail</code>&apos;) with that of your own (or at least with that of the Send Connector you wish to analyse).</p>
<p>You can find the name of your Send Connector via the EAC: click <strong>mail flow</strong> &gt; <strong>send connectors</strong> and you&apos;ll see your Send Connector&apos;s name listed in the table. Alternatively, you can run <code>Get-SendConnector</code> in the EMS; your Send Connector name(s) will be listed under the <code>Identity</code> column.</p>
<p><strong>2.</strong> You may have to adjust the default pinned queries depending on how you differentiate between internal and external communication in your Exchange organization setup.</p>
<p><em><strong>Sources</strong></em>:</p>
<p>Configure Message Tracking: <a href="http://technet.microsoft.com/en-us/library/aa997984(v=exchg.150).aspx">http://technet.microsoft.com/en-us/library/aa997984(v=exchg.150).aspx</a></p>
<p><em>Spot any mistakes? Or have any suggestions? Please make a comment below.</em></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Updated: Monitoring pfSense (2.1 & 2.2) logs using ELK (ElasticSearch, Logstash, Kibana)]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://res.cloudinary.com/qunux/image/upload/v1566651153/Kibana_3_pfSense_Firewall_dashboard_2_wspimr.png" alt loading="lazy"></p>
<h2 style="color:red"><em>Scroll to the bottom for the update on applying this tutorial to the new pfSense 2.2 log format</em></h2>
<p><strong>What is pfSense?</strong></p>
<p>Only the best open source, software based firewall there is (I&apos;m biased). I use it a lot, especially in virtualized environments. <a href="https://www.pfsense.org/">https://www.pfsense.org/</a></p>
<p><strong>What</strong></p>]]></description><link>https://elijahpaul.co.uk/monitoring-pfsense-2-1-logs-using-elk-logstash-kibana-elasticsearch/</link><guid isPermaLink="false">5f68100167fa6f0001d19558</guid><category><![CDATA[pfSense]]></category><category><![CDATA[Firewall]]></category><category><![CDATA[Logstash]]></category><category><![CDATA[Elasticsearch]]></category><category><![CDATA[Kibana]]></category><category><![CDATA[Logging]]></category><category><![CDATA[Splunk]]></category><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Sun, 12 Oct 2014 09:20:00 GMT</pubDate><media:content url="https://elijahpaul.co.uk/content/images/2014/10/Kibana_3_pfSense_Firewall_dashboard_small.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://elijahpaul.co.uk/content/images/2014/10/Kibana_3_pfSense_Firewall_dashboard_small.jpg" alt="Updated: Monitoring pfSense (2.1 &amp; 2.2) logs using ELK (ElasticSearch, Logstash, Kibana)"><p><img src="https://res.cloudinary.com/qunux/image/upload/v1566651153/Kibana_3_pfSense_Firewall_dashboard_2_wspimr.png" alt="Updated: Monitoring pfSense (2.1 &amp; 2.2) logs using ELK (ElasticSearch, Logstash, Kibana)" loading="lazy"></p>
<h2 style="color:red"><em>Scroll to the bottom for the update on applying this tutorial to the new pfSense 2.2 log format</em></h2>
<p><strong>What is pfSense?</strong></p>
<p>Only the best open source, software based firewall there is (I&apos;m biased). I use it a lot, especially in virtualized environments. <a href="https://www.pfsense.org/">https://www.pfsense.org/</a></p>
<p><strong>What is ELK?</strong></p>
<p>ELK (ElasticSearch, Logstash, Kibana) is a pretty cool open source stack that enables you to collect, store, search and visualize logs from almost any system that outputs logs, all to a centralised location/server.<br>
Check out the <a href="http://www.elasticsearch.org/">elasticsearch website</a> for more detailed info.</p>
<p><strong>Installing ELK on Linux</strong></p>
<p>There are quite a few tutorials out there on installing ELK on various Linux distributions. Here&apos;s a list of the few I found very helpful:</p>
<p><a href="http://www.ragingcomputer.com/2014/02/monitoring-pfsense-with-logstash-elasticsearch-kibana">http://www.ragingcomputer.com/2014/02/monitoring-pfsense-with-logstash-elasticsearch-kibana</a></p>
<p><a href="https://blog.devita.co/2014/09/04/monitoring-pfsense-firewall-logs-with-elk-logstash-kibana-elasticsearch/">https://blog.devita.co/2014/09/04/monitoring-pfsense-firewall-logs-with-elk-logstash-kibana-elasticsearch/</a> &lt;= I used this one.</p>
<p><a href="https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-logs-on-centos-6">https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-logs-on-centos-6</a></p>
<p><a href="https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-and-visualize-logs-on-ubuntu-14-04">https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-and-visualize-logs-on-ubuntu-14-04</a></p>
<p><strong>IP Setup in this tutorial</strong></p>
<p>pfSense Server IP (WAN): <strong>172.16.0.1</strong><br>
pfSense Server IP (LAN): <strong>192.168.0.2</strong><br>
ELK server IP: <strong>192.168.0.77</strong></p>
<p><em>Substitute the above IPs for the appropriate ones in your own setup.</em></p>
<p><strong>Configuring pfSense for remote logging to ELK</strong></p>
<p>Login to pfSense and check the dashboard to ensure you&apos;re running <strong>pfSense 2.1</strong></p>
<p><img src="https://res.cloudinary.com/qunux/image/upload/v1566651314/pfSense_sys_info_-_itai2t.png" alt="Updated: Monitoring pfSense (2.1 &amp; 2.2) logs using ELK (ElasticSearch, Logstash, Kibana)" loading="lazy"></p>
<p>Now go to the settings tab via Status &gt; System Logs. Check &apos;Send log messages to remote syslog server&apos;, enter your ELK server&apos;s IP address (and port if you&apos;ve set it to something other than the default port 514 in the Logstash config), and check &apos;Firewall events&apos; (or &apos;Everything&apos; if you wish to send all pfSense logs to ELK).</p>
<p><img src="https://res.cloudinary.com/qunux/image/upload/v1566651368/pfSense_remote_logging_options_aaup7p.jpg" alt="Updated: Monitoring pfSense (2.1 &amp; 2.2) logs using ELK (ElasticSearch, Logstash, Kibana)" loading="lazy"></p>
<p><strong>Configuring Logstash to parse pfSense logs</strong></p>
<p>Now back on your ELK server, add the following filter to your <code>logstash.conf</code> file in the <code>/etc/logstash/conf.d/</code> configuration directory, or in a separate pfSense config file (depending on your setup) e.g. <code>11-pfsense.conf</code>.</p>
<pre><code>filter {
      #change to pfSense ip address
      if [host] =~ /172\.16\.0\.1/ {
          grok {
                  add_tag =&gt; [ &quot;firewall&quot; ]
                  match =&gt; [ &quot;message&quot;, &quot;&lt;(?&lt;evtid&gt;.*)&gt;(?&lt;datetime&gt;(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\s+(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9]) (?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:[0-5][0-9])) (?&lt;prog&gt;.*?): (?&lt;msg&gt;.*)&quot; ]
              }
              mutate {
                  gsub =&gt; [&quot;datetime&quot;,&quot;  &quot;,&quot; &quot;]
              }
              date {
                  match =&gt; [ &quot;datetime&quot;, &quot;MMM dd HH:mm:ss&quot; ]
              timezone =&gt; &quot;Europe/London&quot;
              }
              mutate {
                  replace =&gt; [ &quot;message&quot;, &quot;%{msg}&quot; ]
              }
              mutate {
                  remove_field =&gt; [ &quot;msg&quot;, &quot;datetime&quot; ]
              }
          }
          if [prog] =~ /^pf$/ {
              mutate {
                  add_tag =&gt; [ &quot;packetfilter&quot; ]
              }
              multiline {
                  pattern =&gt; &quot;^\s+|^\t\s+&quot;
                  what =&gt; &quot;previous&quot;
              }
              mutate {
                  remove_field =&gt; [ &quot;msg&quot;, &quot;datetime&quot; ]
                  remove_tag =&gt; [ &quot;multiline&quot; ]
              }
              grok {
                  match =&gt; [ &quot;message&quot;, &quot;rule (?&lt;rule&gt;.*)\(.*\): (?&lt;action&gt;pass|block) (?&lt;direction&gt;in|out).* on (?&lt;iface&gt;.*): .* proto (?&lt;proto&gt;TCP|UDP|IGMP|ICMP) .*\n\s*(?&lt;src_ip&gt;(\d+\.\d+\.\d+\.\d+))\.?(?&lt;src_port&gt;(\d*)) [&lt;|&gt;] (?&lt;dest_ip&gt;(\d+\.\d+\.\d+\.\d+))\.?(?&lt;dest_port&gt;(\d*)):&quot; ]
              }
              if [prog] =~ /^dhcpd$/ {
              if [message] =~ /^DHCPACK|^DHCPREQUEST|^DHCPOFFER/ {
                  grok {
                      match =&gt; [ &quot;message&quot;, &quot;(?&lt;action&gt;.*) (on|for|to) (?&lt;src_ip&gt;[0-2]?[0-9]?[0-9]\.[0-2]?[0-9]?[0-9]\.[0-2]?[0-9]?[0-9]\.[0-2]?[0-9]?[0-9]) .*(?&lt;mac_address&gt;[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]).* via (?&lt;iface&gt;.*)&quot; ]
                  }
              }
              if [message] =~ /^DHCPDISCOVER/ {
                  grok {
                      match =&gt; [ &quot;message&quot;, &quot;(?&lt;action&gt;.*) from (?&lt;mac_address&gt;[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F]).* via (?&lt;iface&gt;.*)&quot; ]
                  }
              }
              if [message] =~ /^DHCPINFORM/ {
                  grok {
                      match =&gt; [ &quot;message&quot;, &quot;(?&lt;action&gt;.*) from (?&lt;src_ip&gt;.*).* via (?&lt;iface&gt;.*)&quot; ]
                  }
              }
             } 
          geoip {
            add_tag =&gt; [ &quot;GeoIP&quot; ]
            source =&gt; &quot;src_ip&quot;
          }              
      }
}    
</code></pre>
<p><em><strong>Note</strong></em>: The above config should be placed either in your <code>logstash.conf</code> file, or in its own separate pfSense config file, e.g. <code>11-pfsense.conf</code>. Don&apos;t forget to substitute in your own pfSense IP.</p>
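<p>As a quick sanity check, the core <code>pf</code> grok pattern can be approximated with a Python regex and run against a sample multiline pf log message (an approximation only; field names match the config, and the sample is fabricated):</p>

```python
import re

# Rough Python rendering of the pf grok pattern (field names match the config)
PF_RE = re.compile(
    r"rule (?P<rule>.*)\(.*\): (?P<action>pass|block) (?P<direction>in|out)"
    r".* on (?P<iface>.*?): .* proto (?P<proto>TCP|UDP|IGMP|ICMP) .*\n"
    r"\s*(?P<src_ip>\d+\.\d+\.\d+\.\d+)\.?(?P<src_port>\d*)"
    r" [<>] (?P<dest_ip>\d+\.\d+\.\d+\.\d+)\.?(?P<dest_port>\d*):"
)

sample = ("00:00:04.940687 rule 3/0(match): block in on em0: "
          "(tos 0x0, ttl 252, proto ICMP (1), length 32)\n"
          "    92.0.2.1 > 172.16.0.1: ICMP echo request")
fields = PF_RE.search(sample).groupdict()
```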
<p>I followed Mike DeVita&apos;s <a href="https://blog.devita.co/2014/09/04/monitoring-pfsense-firewall-logs-with-elk-logstash-kibana-elasticsearch/">guide</a> and so keep my pfSense config in a separate file. Also, my pfSense host IP is tagged <code>&quot;PFSense&quot;</code> in my <code>10-syslog.conf</code> file:</p>
<pre>
if [host] =~ /172\.16\.0\.1/ {
      mutate {
        add_tag =&gt; [&quot;PFSense&quot;, &quot;Ready&quot;]
      }
    }
</pre>
<p>Again, see Mike DeVita&apos;s <a href="https://blog.devita.co/2014/09/04/monitoring-pfsense-firewall-logs-with-elk-logstash-kibana-elasticsearch/">guide</a> for more details on this setup. The dashboard I built in Kibana is also based on this setup.</p>
<p><strong>The GeoIP Filter</strong></p>
<p>Including the GeoIP filter means you can filter pfSense&apos;s logged IPs by country. The default Logstash installation includes a GeoIP database based on data from the Maxmind database (the <code>database =&gt;</code> option allows you to include a path to an alternate GeoIP DB that Logstash should use instead, e.g. <a href="http://dev.maxmind.com/geoip/geoip2/geolite2/" target="blank_">a downloaded DB</a>). This means you can build cool panels in Kibana (like the one below) visualising which countries your pfSense firewall is filtering by count or percentage.</p>
<p><img src="https://res.cloudinary.com/qunux/image/upload/v1566651276/pfsense_blocked_by_country__bbxfnc.png" alt="Updated: Monitoring pfSense (2.1 &amp; 2.2) logs using ELK (ElasticSearch, Logstash, Kibana)" loading="lazy"></p>
<p>The built-in GeoIP filter provides a set of GeoIP fields, all of which are included by default. You may not require the full set, in which case you can select only those you intend to use via the <code>fields</code> array option (<a href="http://logstash.net/docs/1.4.2/filters/geoip#fields" target="_blank">http://logstash.net/docs/1.4.2/filters/geoip#fields</a>).</p>
<p>I&apos;ve left mine set to include all fields in case I wish to filter the logs by these additional fields at a later date.</p>
<p>505Forensics has a great (detailed) post on configuring GeoIP location in Logstash: <a href="http://www.505forensics.com/who-have-your-logs-been-talking-to/">http://www.505forensics.com/who-have-your-logs-been-talking-to/</a></p>
<p><strong>Restart Logstash</strong></p>
<p>For the config file(s) to take effect:</p>
<pre>
# service logstash restart
</pre>
<p><strong>Check Logstash pfSense config is working</strong></p>
<p>Running the following command on the ELK server terminal allows you to view the formatted incoming logs from pfSense live:</p>
<pre># tail -f /var/log/logstash/logstash.stdout</pre>
<p>You should see neatly formatted logs similar to the one below pop up every few seconds (depending on your firewalls volume of traffic):</p>
<pre><code>{
       &quot;message&quot; =&gt; &quot;00:00:04.940687 rule 3/0(match): block in on em0: (tos 0x0, ttl 252, id 1, offset 0, flags [DF], proto ICMP (1), length 32)\n    92.972.185.1 &gt; 172.16.0.1: ICMP echo request, id 31127, seq 1, length 12&quot;,
      &quot;@version&quot; =&gt; &quot;1&quot;,
    &quot;@timestamp&quot; =&gt; &quot;2014-10-12T09:05:19.000Z&quot;,
          &quot;type&quot; =&gt; &quot;syslog&quot;,
          &quot;host&quot; =&gt; &quot;172.16.0.1&quot;,
          &quot;tags&quot; =&gt; [
        [0] &quot;PFSense&quot;,
        [1] &quot;firewall&quot;,
        [2] &quot;packetfilter&quot;,
        [3] &quot;GeoIP&quot;
    ],
         &quot;evtid&quot; =&gt; &quot;134&quot;,
          &quot;prog&quot; =&gt; &quot;pf&quot;,
          &quot;rule&quot; =&gt; &quot;3/0&quot;,
        &quot;action&quot; =&gt; &quot;block&quot;,
     &quot;direction&quot; =&gt; &quot;in&quot;,
         &quot;iface&quot; =&gt; &quot;em0&quot;,
         &quot;proto&quot; =&gt; &quot;ICMP&quot;,
        &quot;src_ip&quot; =&gt; &quot;92.972.185.1&quot;,
       &quot;dest_ip&quot; =&gt; &quot;172.16.0.1&quot;,
         &quot;geoip&quot; =&gt; {
                    &quot;ip&quot; =&gt; &quot;92.972.185.1&quot;,
         &quot;country_code2&quot; =&gt; &quot;DE&quot;,
         &quot;country_code3&quot; =&gt; &quot;DEU&quot;,
          &quot;country_name&quot; =&gt; &quot;Germany&quot;,
        &quot;continent_code&quot; =&gt; &quot;EU&quot;,
              &quot;latitude&quot; =&gt; 51.0,
             &quot;longitude&quot; =&gt; 9.0,
              &quot;timezone&quot; =&gt; &quot;Europe/Berlin&quot;,
              &quot;location&quot; =&gt; [
            [0] 9.0,
            [1] 51.0
        ]
    }
}
</code></pre>
<p><strong>The Kibana Dashboard</strong></p>
<p><a href="https://gist.githubusercontent.com/elijahpaul/a1b0296ff442a95e9046/">My dashboard</a> is a version of <a href="https://gist.githubusercontent.com/mikedevita/ff927c5a8fdb138bca35/raw/f0f7d41f62016beb4a11d38e381360e05dbf52b1/pfsense-firewall.json">Mike DeVita&apos;s pfSense dashboard</a> altered to include GeoIP visualisations of pfSense&apos;s logs. I also added a couple of extra panels to visualize which destination port numbers are being blocked the most...</p>
<p><img src="https://res.cloudinary.com/qunux/image/upload/v1566651450/kibana_blocked_by_port_p6sgke.png" alt="Updated: Monitoring pfSense (2.1 &amp; 2.2) logs using ELK (ElasticSearch, Logstash, Kibana)" loading="lazy"></p>
<p>And a map visualizing which countries are most blocked by IP.</p>
<p><img src="https://res.cloudinary.com/qunux/image/upload/v1566651450/kibana_countries_blocked_by_ip_tbd5iz.png" alt="Updated: Monitoring pfSense (2.1 &amp; 2.2) logs using ELK (ElasticSearch, Logstash, Kibana)" loading="lazy"></p>
<p>Link to Dashboard Gist: <a href="https://gist.githubusercontent.com/elijahpaul/a1b0296ff442a95e9046/raw/495ed961ba51614bbd2ef3c00216f4fce9839da6/pfsense_kibana_dash_v1.json">a1b0296ff442a95e9046</a></p>
<p><strong>Issues</strong></p>
<p>Initially I had an issue where the logs pfSense sent to ELK were timestamped with the wrong timezone, despite both servers being configured with the same timezone (BST). I resolved this by adding the <code>timezone</code> option to the <code>date</code> section of my <code>11-pfsense.conf</code> config file in <code>/etc/logstash/conf.d/</code>.</p>
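<p>The relevant snippet ends up looking something like this (assuming a UK server; substitute your own timezone identifier):</p>
<pre><code>date {
  match =&gt; [ &quot;datetime&quot;, &quot;MMM dd HH:mm:ss&quot; ]
  timezone =&gt; &quot;Europe/London&quot;
}
</code></pre>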
<p><strong>Update: New filter for pfSense 2.2</strong> <em>(N.B. This filter currently doesn&apos;t parse ICMPv6 logs)</em></p>
<p>Quite a few people requested an updated filter to handle the new log format in <a href="https://blog.pfsense.org/?p=1546">pfSense 2.2</a>. The new format is CSV, which is much easier to parse; however, packet filter logs vary in length depending on the IP version and protocol being logged.</p>
<p>The filter below, together with <strong><a href="https://gist.github.com/elijahpaul/f5f32d4e914dcb7fedd2">THIS</a></strong> custom pattern file (courtesy of <a href="https://plus.google.com/115259709466049597336">J. Pisano</a>; mega thanks) parses logs for both IPv4 &amp; IPv6 and TCP, UDP and ICMP protocols (currently excluding ICMPv6).</p>
<p>Place the <code>pfsense2-2.grok</code> file in your patterns folder and make sure to refer to your patterns directory location in the filter via the <code>patterns_dir</code> setting.</p>
<p>Any suggestions for corrections and/or improvement are welcome. :)</p>
<pre><code>filter {
  if &quot;PFSense&quot; in [tags] {
    grok {
      add_tag =&gt; [ &quot;firewall&quot; ]
      match =&gt; [ &quot;message&quot;, &quot;&lt;(?&lt;evtid&gt;.*)&gt;(?&lt;datetime&gt;(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\s+(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9]) (?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:[0-5][0-9])) (?&lt;prog&gt;.*?): (?&lt;msg&gt;.*)&quot; ]
    }
    mutate {
      gsub =&gt; [&quot;datetime&quot;,&quot;  &quot;,&quot; &quot;]
    }
    date {
      match =&gt; [ &quot;datetime&quot;, &quot;MMM dd HH:mm:ss&quot; ]
    }
    mutate {
      replace =&gt; [ &quot;message&quot;, &quot;%{msg}&quot; ]
    }
    mutate {
      remove_field =&gt; [ &quot;msg&quot;, &quot;datetime&quot; ]
    }
}
if [prog] =~ /^filterlog$/ {
    mutate {
      remove_field =&gt; [ &quot;msg&quot;, &quot;datetime&quot; ]      
    }
    grok {
      patterns_dir =&gt; &quot;./patterns&quot;
      match =&gt; [ &quot;message&quot;, &quot;%{LOG_DATA}%{IP_SPECIFIC_DATA}%{IP_DATA}%{PROTOCOL_DATA}&quot; ]
    }
    mutate {
      lowercase =&gt; [ &apos;proto&apos; ]
    }
    geoip {
      add_tag =&gt; [ &quot;GeoIP&quot; ]
      source =&gt; &quot;src_ip&quot;
    }
  }
}
</code></pre>
<p><strong>Credits</strong></p>
<p>Main credit goes to <a href="https://blog.devita.co/2014/09/04/monitoring-pfsense-firewall-logs-with-elk-logstash-kibana-elasticsearch/">Mike DeVita&apos;s guide for pfSense 2.2</a>, on which I based my config and dashboard.</p>
<p><strong>Resources</strong></p>
<p><a href="http://logstash.net/docs/1.4.2/">Logstash Docs v1.4.2</a></p>
<p><a href="https://github.com/elasticsearch/logstash/blob/v1.4.2/patterns/grok-patterns">Logstash Grok Patterns</a></p>
<p><a href="http://grokdebug.herokuapp.com/">Grok Debugger</a></p>
<p><a href="http://grokconstructor.appspot.com/do/match">Another Grok Debugger</a></p>
<p><a href="https://doc.pfsense.org/index.php/Filter_Log_Format_for_pfSense_2.2">Filter Log Format for pfSense 2.2</a></p>
<p><a href="https://github.com/pfsense/pfsense/blob/master/etc/protocols">pfSense Protocols</a></p>
<p><a href="https://forum.pfsense.org/index.php?topic=87846.0">Logstash filter for Nagios</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Free (minor) update to smashing payment icon set]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Originally created by <a href="http://www.thewebdesignblog.co.uk/">Phil Matthews</a> almost 5 years ago, <a href="http://www.smashingmagazine.com/2010/10/21/free-png-credit-card-debit-card-and-payment-icons-set-18-icons/">this set</a> of payment icons is still my favourite:</p>
<p>I needed a &apos;bitcoin&apos; payment icon to match this set for a project I&apos;m working on, so decided to create one.</p>
<p style="text-align:center">![bitcoin payment icon](/content/images/2014/Jul/</p>]]></description><link>https://elijahpaul.co.uk/free-update-to-smashing-payment-icon-set/</link><guid isPermaLink="false">5f68100167fa6f0001d19555</guid><category><![CDATA[Freebies]]></category><category><![CDATA[Icons]]></category><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Thu, 24 Jul 2014 07:04:02 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Originally created by <a href="http://www.thewebdesignblog.co.uk/">Phil Matthews</a> almost 5 years ago, <a href="http://www.smashingmagazine.com/2010/10/21/free-png-credit-card-debit-card-and-payment-icons-set-18-icons/">this set</a> of payment icons is still my favourite:</p>
<p>I needed a &apos;bitcoin&apos; payment icon to match this set for a project I&apos;m working on, so decided to create one.</p>
<p style="text-align:center"><img src="/content/images/2014/Jul/bitcoin-curved-128px.png" alt="bitcoin payment icon" loading="lazy"></p><p>
Once I created the template, I thought I may as well update a few of the outdated icons from the original pack that I also needed.
<ul>
<li>Visa (2014 logo)</li>
<li>PayPal (2014 logo)</li>
<li>Skrill (formerly moneybookers)</li>
<li>Sage (inverted)</li>
</ul>
</p><p>Like the original set, I&apos;ve included 32px, 64px, 128px, PNG curved &amp; straight in the zip download below.</p>
<p style="text-align:center"><img src="/content/images/2014/Jul/payicons-preview2.png" alt="payment icons preview" loading="lazy"><br>
<a href="http://bit.ly/payment-icon-set-updates">download these updates free</a> (zip, 148KB)<br><a href="http://bit.ly/payment-icon-set-updates" target="_blank"><img width="50px" src="https://elijahpaul.co.uk/content/images/2014/Jul/iconmonstr-download-9-icon.svg"></a>
</p><p>
</p><p>Source: <a href="http://www.smashingmagazine.com/2010/10/21/free-png-credit-card-debit-card-and-payment-icons-set-18-icons/">http://www.smashingmagazine.com/2010/10/21/free-png-credit-card-debit-card-and-payment-icons-set-18-icons/</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Install Script for Transmission (v2.84) SeedBox on CentOS 6.5]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p style="text-align:center">
![torrent file type pic](/content/images/2014/Jul/torrent300x300.png)
</p>
This is an updated version of the script posted at [Transmission SeedBox](http://transmissionseedbox.blogspot.de/2012/01/creating-seedbox-in-centos-6.html) for installing transmission on a CentOS (6.5) server. (Please see the original post for more details).
<p><strong>Three lines!</strong>:</p>
<pre># wget</pre>]]></description><link>https://elijahpaul.co.uk/script-install-for-transmission-2-83-seedbox-on-centos-6-5/</link><guid isPermaLink="false">5f68100167fa6f0001d19554</guid><category><![CDATA[CentOS]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Torrent]]></category><category><![CDATA[SeedBox]]></category><category><![CDATA[Shell]]></category><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Thu, 26 Jun 2014 18:24:15 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p style="text-align:center">
<img src="/content/images/2014/Jul/torrent300x300.png" alt="torrent file type pic" loading="lazy">
</p>
This is an updated version of the script posted at <a href="http://transmissionseedbox.blogspot.de/2012/01/creating-seedbox-in-centos-6.html">Transmission SeedBox</a> for installing Transmission on a CentOS (6.5) server. (Please see the original post for more details.)
<p><strong>Three lines!</strong>:</p>
<pre># wget https://github.com/elijahpaul/install-transmission/raw/master/install-transmission.sh</pre>
<pre># chmod u+x install-transmission.sh</pre>
<pre># ./install-transmission.sh</pre>
<p>Done.</p>
<p>You can now use the <a href="https://code.google.com/p/transmisson-remote-gui/">transmisson-remote-gui</a> front-end app to access your install remotely, or access the web interface at <code>http://&lt;YOUR-SERVER-IP&gt;:9091</code>.</p>
<p>Go ahead and try downloading a torrent:<br>
<a href="http://releases.ubuntu.com/14.04/ubuntu-14.04-server-amd64.iso.torrent">Ubuntu 14.04 LTS Server (64bit)</a></p>
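<p>If you prefer the command line, the bundled <code>transmission-remote</code> tool can add and list torrents too (assuming the default port and no RPC authentication; add <code>-n user:pass</code> if you enabled it):</p>
<pre><code># transmission-remote localhost:9091 -a http://releases.ubuntu.com/14.04/ubuntu-14.04-server-amd64.iso.torrent
# transmission-remote localhost:9091 -l
</code></pre>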
<p><strong>EDIT</strong>: What&apos;s updated from the original?</p>
<ul><li>Added <code>xz</code> to the install on line 9 to enable <code>tar</code> to untar the <code>transmission-2.84.tar.xz</code> file</li><li>~~Updated the libevent package from version <code>libevent-2.0.19-stable.tar.gz</code> to <code>libevent-2.0.21-stable.tar.gz</code><br>And switched the download URL from github to sourceforge (due to SSL handshake errors with some versions of wget and the orignal URL)~~</li>
<li>9th July 2015: Updated libevent to version <code>libevent-2.0.22-stable.tar.gz</code>  &amp; URL from dead sourceforge link to github (<a href="https://github.com/elijahpaul/install-transmission/issues/1" target="_blank">see here</a>)</li><li>Updated the CSF firewall restart command from <code>service csf restart</code> to <code>csf -r</code></li></ul>
<p>If you spot any errors or have any issues, please do leave a comment here or on <a href="https://github.com/elijahpaul/install-transmission">GitHub</a></p>
<p><em>source: <a href="http://transmissionseedbox.blogspot.de/2012/01/creating-seedbox-in-centos-6.html">http://transmissionseedbox.blogspot.de/2012/01/creating-seedbox-in-centos-6.html</a></em></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Password Protection using .htaccess]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Password protecting your folders using <strong>htaccess protection</strong> involves two files, <strong>.htaccess</strong> and <strong>.htpasswd</strong>.</p>
<p><strong>1.</strong> Use the following command in Linux to create the <strong>.htpasswd</strong> file (by the way the .htpasswd file can be named anything, doesn&apos;t have to be named &apos;.htpasswd&apos;). Replace <code>username</code> with the username</p>]]></description><link>https://elijahpaul.co.uk/password-protection-using-htaccess/</link><guid isPermaLink="false">5f68100167fa6f0001d19550</guid><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Thu, 12 Jun 2014 01:41:28 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Password protecting your folders using <strong>htaccess protection</strong> involves two files, <strong>.htaccess</strong> and <strong>.htpasswd</strong>.</p>
<p><strong>1.</strong> Use the following command in Linux to create the <strong>.htpasswd</strong> file (the .htpasswd file itself can be given any name). Replace <code>username</code> with the username you wish to use:</p>
<pre><code>htpasswd -c /path/to/.htpasswd username
</code></pre>
<p>The file&apos;s contents will look something like this:</p>
<pre><code>username:dGRkPurkuWmW2
</code></pre>
<p><em>Alternatively you can use an <a href="http://www.htpasswdgenerator.net/">online .htpasswd generator</a> to create the password(s) you require and manually write them to your .htpasswd file.</em></p>
<p>Ensure that the webserver has access permissions to your .htpasswd file (and the parent folder it&apos;s located in), and be sure to create the .htpasswd file outside of any publicly accessible directory.</p>
<p>Each line in the .htpasswd file can contain a username and password combination, so add as many combinations as you require.</p>
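<p>To append a user to an existing file, drop the <code>-c</code> flag (which would otherwise recreate the file and wipe existing entries); for example, with a hypothetical second user:</p>
<pre><code>htpasswd /path/to/.htpasswd anotheruser
</code></pre>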
<p><strong>2.</strong> Now create a plain text file named <code>.htaccess</code> (in Windows or Linux) inside the folder you wish to password protect, with the following contents:</p>
<pre><code>AuthType Basic
AuthName &quot;Password Protected Area&quot;
AuthUserFile /path/to/.htpasswd
Require valid-user
</code></pre>
<p>Replace <code>/path/to/.htpasswd</code> with the full path to your .htpasswd file.</p>
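<p>If you want to grant access to specific users only (instead of any valid user), list them explicitly; for example, with hypothetical users <code>alice</code> and <code>bob</code>:</p>
<pre><code>AuthType Basic
AuthName &quot;Password Protected Area&quot;
AuthUserFile /path/to/.htpasswd
Require user alice bob
</code></pre>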
<p>Your directory is now password protected.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Installing the MK Check Agent on CentOS 6 (64bit)]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Download the script</p>
<pre><code># wget -O mk-agent-install.sh http://pastie.org/pastes/9233120/download
</code></pre>
<p>Make the script executable and run it. <strong>Important</strong>: you must be using CentOS 6 (64bit) and you must be root, otherwise this will not work.</p>
<pre><code># chmod u+x mk-agent-install.sh
</code></pre>
<pre><code># ./mk-agent-install.sh
Enter the IP address(es)</code></pre>]]></description><link>https://elijahpaul.co.uk/installing-the-mk-check-agent-on-centos-6-64bit/</link><guid isPermaLink="false">5f68100167fa6f0001d1954f</guid><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Thu, 29 May 2014 03:28:12 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Download the script</p>
<pre><code># wget -O mk-agent-install.sh http://pastie.org/pastes/9233120/download
</code></pre>
<p>Make the script executable and run it. <strong>Important</strong>: you must be using CentOS 6 (64bit) and you must be root, otherwise this will not work.</p>
<pre><code># chmod u+x mk-agent-install.sh
</code></pre>
<pre><code># ./mk-agent-install.sh
Enter the IP address(es) of your OMD/Nagios server(s) (separated by spaces): 
</code></pre>
<p>Enter the IP address(es) of your OMD/Nagios server(s) separated by spaces.</p>
<p>That&apos;s it.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Monitoring SNI SSL Certificate Expiration with Nagios]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Nagios&apos; plugin check_http can also be used to verify the validity/expiration of an SSL certificate.</p>
<p>However if your webserver uses SNI (multiple SSL certificates on the same IP address), you have to use the <code>--sni</code> switch. Otherwise information for the wrong (default) SSL certitificate will be shown:</p>]]></description><link>https://elijahpaul.co.uk/monitoring-sni-ssl-certificate-expiration-with-nagios/</link><guid isPermaLink="false">5f68100167fa6f0001d1954e</guid><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Fri, 09 May 2014 12:46:26 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Nagios&apos; plugin check_http can also be used to verify the validity/expiration of an SSL certificate.</p>
<p>However, if your webserver uses SNI (multiple SSL certificates on the same IP address), you have to use the <code>--sni</code> switch. Otherwise information for the wrong (default) SSL certificate will be shown:</p>
<pre><code>./check_http -H reddit.com -S -C 30,14
OK - Certificate &apos;notreddit.com&apos; will expire on Thu May 29 00:59:00 2014.
</code></pre>
<p>Note the wrong certificate common name.</p>
<p>For SNI enabled webservers, the switch <code>--sni</code> is a must:</p>
<pre><code>./check_http -H reddit.com -S --sni -C 30,14
OK - Certificate &apos;reddit.com&apos; will expire on Thu Apr 23 00:59:00 2015.
</code></pre>
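<p>To run this as a regular Nagios check, a command and service definition along these lines should work (the names and host below are illustrative, not from this post):</p>
<pre><code>define command {
  command_name  check_ssl_cert_sni
  command_line  $USER1$/check_http -H $ARG1$ -S --sni -C 30,14
}

define service {
  use                  generic-service
  host_name            webserver1
  service_description  SSL certificate - reddit.com
  check_command        check_ssl_cert_sni!reddit.com
}
</code></pre>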
<p>Source: <a href="http://www.claudiokuenzler.com/blog/431/check_http-ssl-certificate-wrong-hostname-issue-sni">check_http and SNI SSL certificates</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Cancel Amazon Web Services (AWS) Account Shortcut]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Shortcut link to permanently close your Amazon Web Services account:</p>
<p>Log in as normal, then use the link below.</p>
<p><a href="https://console.aws.amazon.com/billing/home?#/account">https://console.aws.amazon.com/billing/home?#/account</a></p>
<p>Scroll down to the cancel option:<br>
<img src="https://elijahpaul.co.uk/content/images/2014/Jul/Billing-Management-Console-2014-07-03-18-24-38-1.png" alt loading="lazy"></p>
<p>Tick the checkbox and click on &apos;Close Account&apos;</p>
<!--kg-card-end: markdown-->]]></description><link>https://elijahpaul.co.uk/cancel-amazon-web-services-aws-account-shortcut/</link><guid isPermaLink="false">5f68100167fa6f0001d19552</guid><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Thu, 03 Apr 2014 14:54:03 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Shortcut link to permanently close your Amazon Web Services account:</p>
<p>Log in as normal, then use the link below.</p>
<p><a href="https://console.aws.amazon.com/billing/home?#/account">https://console.aws.amazon.com/billing/home?#/account</a></p>
<p>Scroll down to the cancel option:<br>
<img src="https://elijahpaul.co.uk/content/images/2014/Jul/Billing-Management-Console-2014-07-03-18-24-38-1.png" alt loading="lazy"></p>
<p>Tick the checkbox and click on &apos;Close Account&apos;</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Using an external SMTP server with GitLab]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://storage.sbg-1.qnxcdn.com/v1/AUTH_38abc4487f494713a8d966250d77b68f/pubs/img/gitlab_smtp_opt.png" alt="gitlab_smtp_img" loading="lazy"></p>
<h2 id="gitlab7omnibus">GitLab 7 (Omnibus)</h2>
<p>Edit the <code>/etc/gitlab/gitlab.rb</code> file to match your SMTP server&apos;s settings and credentials, e.g.:</p>
<pre><code>gitlab_rails[&apos;smtp_enable&apos;] = true
gitlab_rails[&apos;smtp_address&apos;] = &quot;smtp.server&quot;
gitlab_rails[&apos;smtp_port&apos;] = 456
gitlab_rails[&apos;smtp_user_name&</code></pre>]]></description><link>https://elijahpaul.co.uk/using-an-smtp-server-with-gitlab/</link><guid isPermaLink="false">5f68100167fa6f0001d19551</guid><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Fri, 07 Mar 2014 03:50:21 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://storage.sbg-1.qnxcdn.com/v1/AUTH_38abc4487f494713a8d966250d77b68f/pubs/img/gitlab_smtp_opt.png" alt="gitlab_smtp_img" loading="lazy"></p>
<h2 id="gitlab7omnibus">GitLab 7 (Omnibus)</h2>
<p>Edit the <code>/etc/gitlab/gitlab.rb</code> file to match your SMTP server&apos;s settings and credentials, e.g.:</p>
<pre><code>gitlab_rails[&apos;smtp_enable&apos;] = true
gitlab_rails[&apos;smtp_address&apos;] = &quot;smtp.server&quot;
gitlab_rails[&apos;smtp_port&apos;] = 456
gitlab_rails[&apos;smtp_user_name&apos;] = &quot;smtp user&quot;
gitlab_rails[&apos;smtp_password&apos;] = &quot;smtp password&quot;
gitlab_rails[&apos;smtp_domain&apos;] = &quot;example.com&quot;
gitlab_rails[&apos;smtp_authentication&apos;] = &quot;login&quot;
gitlab_rails[&apos;smtp_enable_starttls_auto&apos;] = true

# If your SMTP server does not like the default &apos;From: gitlab@localhost&apos; you
# can change the &apos;From&apos; with this setting.
gitlab_rails[&apos;gitlab_email_from&apos;] = &apos;gitlab@example.com&apos;
</code></pre>
<p>Now run:</p>
<pre><code>gitlab-ctl reconfigure
</code></pre>
<p>Done.</p>
<h2 id="gitlab67manualinstall">GitLab 6 &amp; 7 (Manual Install)</h2>
<p>First edit the <code>config/environments/production.rb</code> file, to configure GitLab to use SMTP by default; change the line:</p>
<pre><code>config.action_mailer.delivery_method = :sendmail
</code></pre>
<p>to</p>
<pre><code>config.action_mailer.delivery_method = :smtp
</code></pre>
<p>Now make a copy of the <code>smtp_settings.rb.sample</code> file:</p>
<pre><code># cp config/initializers/smtp_settings.rb.sample config/initializers/smtp_settings.rb
</code></pre>
<p>And edit the appropriate settings (address, port, username, password, etc...):</p>
<pre><code># To enable smtp email delivery for your GitLab instance do next:
# 1. Rename this file to smtp_settings.rb
# 2. Edit settings inside this file
# 3. Restart GitLab instance
#
if Rails.env.production?
  Gitlab::Application.config.action_mailer.delivery_method = :smtp

  ActionMailer::Base.smtp_settings = {
    address: &quot;email.server.com&quot;,
    port: 465,
    user_name: &quot;smtp&quot;,
    password: &quot;123456&quot;,
    domain: &quot;gitlab.company.com&quot;,
    authentication: :login,
    enable_starttls_auto: true
  }
end
</code></pre>
<p>Here are a couple of example settings:</p>
<h4 id="gmail">Gmail</h4>
<pre><code>config.action_mailer.smtp_settings = {
  :address              =&gt; &quot;smtp.gmail.com&quot;,
  :port                 =&gt; 587,
  :domain               =&gt; &apos;gmail.com&apos;,
  :user_name            =&gt; &apos;account@gmail.com&apos;,
  :password             =&gt; &apos;password&apos;,
  :authentication       =&gt;  :plain,
  :enable_starttls_auto =&gt; true
}
</code></pre>
<h4 id="mailgun">Mailgun</h4>
<pre><code>config.action_mailer.smtp_settings = {
  :address              =&gt; &quot;smtp.mailgun.org&quot;,
  :port                 =&gt; 587,
  :domain               =&gt; &apos;gitlab.mydomain.com&apos;,
  :user_name            =&gt; &apos;user@mydomain.org&apos;,
  :password             =&gt; &apos;password&apos;,
  :authentication       =&gt;  :plain,
  :enable_starttls_auto =&gt; true
}
</code></pre>
<p>Restart GitLab &amp; nginx:</p>
<pre><code># service gitlab restart
# service nginx restart
</code></pre>
<p>That&apos;s it, you&apos;re done.</p>
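<p>If mail still does not arrive, you can inspect the settings GitLab actually loaded from the Rails console (assuming a standard manual install under <code>/home/git/gitlab</code>):</p>
<pre><code>[git@gitlab gitlab]$ bundle exec rails console production
irb(main)&gt; ActionMailer::Base.smtp_settings
</code></pre>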
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Installing GitLab 6.6 (6.x) on CentOS 6.5 with Percona Server 5.6]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><a href="http://gitlab.org/">GitLab</a> CE (Community Edition) is essentially a self-hosted opensource clone of the online Git code repository service <a href="https://github.com/">GitHub</a>.</p>
<p>The main <a href="https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/install/installation.md">installation guide</a> for GitLab was written for installation on Ubuntu/Debian operating systems.</p>
<p>This guide covers the steps required for a fresh install of GitLab production server on CentOS 6.</p>]]></description><link>https://elijahpaul.co.uk/installing-gitlab-6-6-x-on-centos-6-5-with-percona-server-5-6/</link><guid isPermaLink="false">5f68100167fa6f0001d1954b</guid><category><![CDATA[Nginx]]></category><category><![CDATA[GitLab]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[Git]]></category><category><![CDATA[Versioning]]></category><category><![CDATA[Percona]]></category><category><![CDATA[CentOS]]></category><category><![CDATA[Ruby]]></category><category><![CDATA[MySQL]]></category><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Sun, 02 Feb 2014 08:19:28 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><a href="http://gitlab.org/">GitLab</a> CE (Community Edition) is essentially a self-hosted opensource clone of the online Git code repository service <a href="https://github.com/">GitHub</a>.</p>
<p>The main <a href="https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/install/installation.md">installation guide</a> for GitLab was written for installation on Ubuntu/Debian operating systems.</p>
<p>This guide covers the steps required for a fresh install of GitLab production server on CentOS 6.5 with <a href="http://www.percona.com/software/percona-server/ps-5.6">Percona Server 5.6</a> (drop-in replacement for MySQL&#xAE;).</p>
<p><strong>1.</strong> Install the EPEL and RPMForge packages, and update CentOS 6.5.</p>
<pre><code># rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

# rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm

# yum -y update
</code></pre>
<p><strong>2.</strong> Enable the RPMForge Extras repository. Open the RPMForge repo file:</p>
<pre><code>nano /etc/yum.repos.d/rpmforge.repo
</code></pre>
<p>and change <code>enabled = 0</code> under <code>[rpmforge-extras]</code> to <code>enabled = 1</code></p>
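<p>Alternatively, you can leave the repo disabled in the file and enable it only for the commands that need it:</p>
<pre><code># yum --enablerepo=rpmforge-extras install &lt;package&gt;
</code></pre>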
<p><strong>3.</strong> Add the SCL (Software Collections) Ruby 1.9.3 repo to your CentOS installation.</p>
<pre><code># wget -P /etc/yum.repos.d http://people.redhat.com/bkabrda/scl_ruby193.repo
</code></pre>
<p><strong>4.</strong> Then install the Development tools.</p>
<pre><code># yum -y groupinstall &apos;Development Tools&apos;
</code></pre>
<p><strong>5.</strong> Install the Percona repo and Percona Server 5.6</p>
<pre><code># rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
# yum -y install Percona-Server-client-56 Percona-Server-server-56 Percona-Server-devel-56
</code></pre>
<p><strong>6.</strong> Now install ruby193, git, redis and nginx (as well as a few other required packages), and start all the services.</p>
<p>(* The install command below should cover all the packages required to enable a smooth install of GitLab. However, depending on your base install of CentOS 6.5, you may require some extra packages not listed below. The installation process is pretty good at making clear what you&apos;re missing if anything.)</p>
<pre><code>
# service mysql start 
# service redis start 
# service nginx start 
# chkconfig mysql on
# chkconfig redis on
# chkconfig nginx on
</code></pre>
<p><strong>7.</strong> Add the user git and set the git globals.</p>
<pre><code># adduser -r -s /bin/bash -c &apos;Gitlab user&apos; -m -d /home/git git
# chmod o+x /home/git
# su - git
[git@gitlab ~]$ git config --global user.name &quot;GitLab&quot;
[git@gitlab ~]$ git config --global user.email &quot;gitlab@localhost&quot;
[git@gitlab ~]$ git config --global core.autocrlf input
</code></pre>
<p><strong>8.</strong> Still as the git user, clone the git repos.</p>
<pre><code>[git@gitlab ~]$ git clone https://github.com/gitlabhq/gitlab-shell.git
[git@gitlab ~]$ git clone https://github.com/gitlabhq/gitlabhq.git gitlab
</code></pre>
<p><strong>9.</strong> Extend your .bashrc ( <code>[git@gitlab ~]$ nano .bashrc</code> ) with the following content, which will load the SCL Ruby environment when we log in.</p>
<pre><code># .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

source /opt/rh/ruby193/enable

# User specific aliases and functions
</code></pre>
<p><strong>10.</strong> Make sure the paths are right. You can test this by executing <code>env</code>.</p>
<p><strong>11.</strong> Log out and log in again as the git user ( <code>su - git</code> ). If everything went fine you should be able to do this:</p>
<pre><code>[git@gitlab ~]$ which ruby
/opt/rh/ruby193/root/usr/bin/ruby
[git@gitlab ~]$ ruby --version
ruby 1.9.3p327 (2012-11-10 revision 37606) [x86_64-linux]
</code></pre>
<p><strong>12.</strong> As git, cd into the gitlab-shell dir and check out version 1.8.0.</p>
<pre><code>[git@gitlab ~]$ cd gitlab-shell/
[git@gitlab gitlab-shell]$ git checkout v1.8.0
</code></pre>
<p><strong>13.</strong> Copy the file <code>config.yml.example</code> to <code>config.yml</code> in the gitlab-shell directory.</p>
<pre><code>cp config.yml.example config.yml
</code></pre>
<p><strong>14.</strong> Install gitlab-shell.</p>
<pre><code>[git@gitlab gitlab-shell]$ ./bin/install
</code></pre>
<p><strong>15.</strong> <strong>IMPORTANT</strong>. Do this still as the git user! Check out GitLab 6.6-stable.</p>
<pre><code>[git@gitlab ~]$ cd gitlab
[git@gitlab gitlab]$ git checkout 6-6-stable
</code></pre>
<p><strong>16.</strong> Now install the required gems.</p>
<pre><code>[git@gitlab gitlab]$ gem install json
[git@gitlab gitlab]$ gem install charlock_holmes --version &apos;0.6.9.4&apos;
</code></pre>
<p><strong>17.</strong> Time to configure GitLab. Copy the example GitLab config and set the necessary file permissions, ownership, and create the required directories using the following commands.</p>
<pre><code>[git@gitlab gitlab]$ cp /home/git/gitlab/config/gitlab.yml{.example,}
[git@gitlab gitlab]$ sed -i &apos;s/localhost/gitlab.local.domb.com/g&apos; /home/git/gitlab/config/gitlab.yml
[git@gitlab gitlab]$ chown -R git /home/git/gitlab/log/
[git@gitlab gitlab]$ chown -R git /home/git/gitlab/tmp/
[git@gitlab gitlab]$ chmod -R u+rwX /home/git/gitlab/log/
[git@gitlab gitlab]$ chmod -R u+rwX  /home/git/gitlab/tmp/
[git@gitlab gitlab]$ mkdir /home/git/gitlab-satellites
[git@gitlab gitlab]$ mkdir /home/git/gitlab/tmp/pids/
[git@gitlab gitlab]$ mkdir /home/git/gitlab/tmp/sockets/
[git@gitlab gitlab]$ chmod -R u+rwX /home/git/gitlab/tmp/pids/
[git@gitlab gitlab]$ chmod -R u+rwX /home/git/gitlab/tmp/sockets/
[git@gitlab gitlab]$ mkdir /home/git/gitlab/public/uploads
[git@gitlab gitlab]$ chmod -R u+rwX /home/git/gitlab/public/uploads
</code></pre>
<p><strong>18.</strong> Update your <code>/home/git/gitlab/config/gitlab.yml</code> file with your <code>host</code> (e.g. mygitlab.example.com), and email address  ( <code>email_from</code> ).</p>
<p><strong>19.</strong> Copy the <code>/home/git/gitlab/config/unicorn.rb.example</code> to <code>/home/git/gitlab/config/unicorn.rb</code> and configure it by uncommenting the follow section.</p>
<pre><code> 72    old_pid = &quot;#{server.config[:pid]}.oldbin&quot;
 73    if old_pid != server.pid
 74      begin
 75        sig = (worker.nr + 1) &gt;= server.worker_processes ? :QUIT : :TTOU
 76        Process.kill(sig, File.read(old_pid).to_i)
 77      rescue Errno::ENOENT, Errno::ESRCH
 78      end
 79    end
</code></pre>
<p><strong>20.</strong> Change the user and password in the <code>/home/git/gitlab/config/database.yml</code> file (create this file or rename the existing <code>database.yml.mysql</code> file).</p>
<pre><code>  production:
  adapter: mysql2
  encoding: utf8
  reconnect: false
  database: gitlabhq_production
  pool: 5
  username: gitlab
  password: &quot;mysql_password_here&quot;
  # host: localhost
  # socket: /tmp/mysql.sock
</code></pre>
<p><strong>21.</strong> Create a MySQL user in Percona.</p>
<pre><code>[git@gitlab gitlab]$ mysql -u root
# Create a user for GitLab. (change &apos;mysql_password_here&apos; to a real password)
CREATE USER &apos;gitlab&apos;@&apos;localhost&apos; IDENTIFIED BY &apos;mysql_password_here&apos;;

# Create the GitLab production database
CREATE DATABASE IF NOT EXISTS `gitlabhq_production` DEFAULT CHARACTER SET `utf8` COLLATE `utf8_unicode_ci`;

# Grant the GitLab user necessary permissions on the table.
GRANT SELECT, LOCK TABLES, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON `gitlabhq_production`.* TO &apos;gitlab&apos;@&apos;localhost&apos;;
</code></pre>
<p><strong>22.</strong> Check that you can log in as the gitlab user.</p>
<pre><code>[git@gitlab gitlab]$ mysql -u gitlab -p -D gitlabhq_production
</code></pre>
<p><strong>23.</strong> As root, create a <code>gitlab.conf</code> nginx config file <code># nano /etc/nginx/conf.d/gitlab.conf</code>, and add the following to it.</p>
<pre><code>upstream gitlab {
  server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}
 
server {
  listen *:80 default_server;  # e.g., listen 192.168.1.1:80; in most cases *:80 is a better choice
  server_name gitlab.local.domb.com;  # e.g., server_name mygitlab.example.com;
  server_tokens off;     # don&apos;t show the version number, a security best practice
  root /home/git/gitlab/public;
 
  # individual nginx logs for this gitlab vhost
  access_log  /var/log/nginx/gitlab_access.log;
  error_log   /var/log/nginx/gitlab_error.log;
 
  location / {
    # serve static files from the defined root folder.
    # @gitlab is a named location for the upstream fallback, see below
    try_files $uri $uri/index.html $uri.html @gitlab;
  }
 
  # if a requested file is not found in the root folder,
  # then the request is proxied to the upstream (gitlab unicorn)
  location @gitlab {
    proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
    proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
    proxy_redirect     off;
 
    proxy_set_header   X-Forwarded-Proto $scheme;
    proxy_set_header   Host              $http_host;
    proxy_set_header   X-Real-IP         $remote_addr;
 
    proxy_pass http://gitlab;
  }
}
</code></pre>
<p><strong>24.</strong> (You may need to) Disable the nginx <code>default.conf</code> config file, and restart nginx.</p>
<pre><code># mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/defualt.conf.off
# service nginx restart
</code></pre>
<p><strong>25.</strong> From the gitlab directory, run the following to install GitLab&apos;s gems with MySQL/Percona support.</p>
<pre><code>[git@gitlab gitlab]$ bundle install --deployment --without development test postgres
[git@gitlab gitlab]$ bundle exec rake gitlab:setup RAILS_ENV=production

# Type &apos;yes&apos; to create the database tables.
</code></pre>
<p><strong>26.</strong> If everything installed without any errors, you&apos;ll see:</p>
<pre><code>Administrator account created:

login.........admin@local.host
password......5iveL!fe
</code></pre>
<p><strong>27.</strong> As root, install the init script.</p>
<pre><code># cp /home/git/gitlab/lib/support/init.d/gitlab /etc/init.d/gitlab
# chmod +x /etc/init.d/gitlab
</code></pre>
<p><strong>28.</strong> Optionally, you can now configure the mail server. Open <code>/etc/mail/sendmail.mc</code> and add or modify the following lines.</p>
<pre><code>Add:
define(`SMART_HOST&apos;, `smtp.example.com&apos;)dnl

Comment out:
dnl EXPOSED_USER(`root&apos;)dnl
</code></pre>
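<p>To see what those two edits actually do, here is a self-contained sketch that applies them to a scratch copy of the relevant lines using a heredoc and <code>sed</code>. <code>smtp.example.com</code> is a placeholder, and the real <code>sendmail.mc</code> contains many more lines, so treat this as an illustration rather than a drop-in script.</p>

```shell
#!/bin/sh
# Scratch copy holding just the line we need to comment out.
cat > /tmp/sendmail.mc.demo <<'EOF'
EXPOSED_USER(`root')dnl
EOF

# Add the smart-host definition (smtp.example.com is a placeholder).
cat >> /tmp/sendmail.mc.demo <<'EOF'
define(`SMART_HOST', `smtp.example.com')dnl
EOF

# In sendmail.mc, a leading `dnl` comments a line out.
sed -i 's/^EXPOSED_USER/dnl EXPOSED_USER/' /tmp/sendmail.mc.demo

cat /tmp/sendmail.mc.demo
```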
<p><strong>29.</strong> Restart sendmail <code># service sendmail restart</code></p>
<p><strong>30.</strong> Forward all email to a central mail address.</p>
<pre><code># echo adminlogs@example.com &gt; /root/.forward
# chown root /root/.forward
# chmod 600 /root/.forward
# echo adminlogs@example.com &gt; /home/git/.forward
# chown git /home/git/.forward
# chmod 600 /home/git/.forward
</code></pre>
<p><strong>31.</strong> You&apos;re done. Start up your server.</p>
<pre><code># service gitlab start
# service nginx restart
</code></pre>
<p><strong>32.</strong> You can now browse to your GitLab URL <code>http://yourgitlabserver.com</code> and login using the username and password presented during the installation.</p>
<p>Sources:<br>
<a href="http://blog.domb.net/?p=616">GitLab 6.0 (6.x) installation instructions with SCL for RHEL 6.4 and CentOS 6.4</a><br><br>
<a href="https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/install/installation.md">GitLab.org - installation.md</a></p>
<p>If you spot any errors above, please do drop me a comment below.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[PHP: openssl_pkey_new()]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><strong>Note to self:</strong></p>
<p>When generating a new private key using <code>openssl_pkey_new()</code> and specifying the <code>private_key_bits</code> parameter as a variable, be sure to cast the variable to an integer!</p>
<p>e.g.</p>
<pre><code>$keysize = &quot;2048&quot;;
$privateKey = openssl_pkey_new(array(
    &apos;private_key_bits&apos; =&gt; $keysize,</code></pre>]]></description><link>https://elijahpaul.co.uk/php-openssl_pkey_new/</link><guid isPermaLink="false">5f68100167fa6f0001d1954a</guid><category><![CDATA[php]]></category><category><![CDATA[csr]]></category><category><![CDATA[ssl]]></category><category><![CDATA[openssl]]></category><category><![CDATA[private key]]></category><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Sat, 23 Nov 2013 20:14:53 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><strong>Note to self:</strong></p>
<p>When generating a new private key using <code>openssl_pkey_new()</code> and specifying the <code>private_key_bits</code> parameter as a variable, be sure to cast the variable to an integer!</p>
<p>e.g.</p>
<pre><code>$keysize = &quot;2048&quot;;
$privateKey = openssl_pkey_new(array(
    &apos;private_key_bits&apos; =&gt; $keysize, // ---&gt; This won&apos;t work
    &apos;private_key_bits&apos; =&gt; (int)$keysize, // ---&gt; Correct
    &apos;private_key_bits&apos; =&gt; 2048, // ---&gt; Also correct.
    &apos;private_key_type&apos; =&gt; OPENSSL_KEYTYPE_RSA,
));
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Setup External SMTP Server to Send Nagios Notification Alerts (CentOS)]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>By default Nagios uses localhost to send email notification alerts to designated admins. You can of course set up DNS records to whitelist your Nagios IP/Hostname in order to prevent notifications being marked as spam. But it&apos;s a lot easier to set up Nagios to use an</p>]]></description><link>https://elijahpaul.co.uk/setup-external-smtp-server-to-send-nagios-notifcation-alerts-centos/</link><guid isPermaLink="false">5f68100167fa6f0001d19547</guid><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Fri, 28 Jun 2013 11:00:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>By default Nagios uses localhost to send email notification alerts to designated admins. You can of course set up DNS records to whitelist your Nagios IP/Hostname in order to prevent notifications being marked as spam. But it&apos;s a lot easier to set up Nagios to use an external SMTP server.</p>
<p><a href="http://caspian.dotconf.net/menu/Software/SendEmail/" title="SendEmail" target="_blank">SendEmail</a> allows Nagios to do just that.</p>
<p><strong>1.</strong> Download the SendEmail tar file from <a href="http://caspian.dotconf.net/menu/Software/SendEmail/sendEmail-v1.56.tar.gz" title="sendEmail-v1.56.tar.gz">http://caspian.dotconf.net/menu/Software/SendEmail/sendEmail-v1.56.tar.gz</a></p>
<pre><code># wget http://caspian.dotconf.net/menu/Software/SendEmail/sendEmail-v1.56.tar.gz
</code></pre>
<p><strong>2.</strong> Extract the archive, and copy <code>sendEmail</code> to <code>/usr/local/bin</code>.</p>
<pre><code># tar -xvzf sendEmail-v1.56.tar.gz
# cp sendEmail-v1.56/sendEmail /usr/local/bin
</code></pre>
<p><strong>3.</strong> SendEmail requires the Net::SSLeay and IO::Socket::SSL Perl modules to function, so install them if they&apos;re not already present:</p>
<pre><code># yum install perl-Net-SSLeay
# yum install perl-IO-Socket-SSL
</code></pre>
<p><strong>4.</strong> Now modify the notify-host-by-email and notify-service-by-email commands in Nagios&apos; <code>commands.cfg</code> file as follows:</p>
<pre><code>define command{
command_name notify-host-by-email
command_line /usr/bin/printf &quot;%b&quot; &quot;***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n&quot; | /usr/local/bin/sendEmail -f your_email@your_domain.com -s smtp.server.ip_or_hostname:portnumber -u &quot;** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **&quot; -t $CONTACTEMAIL$
}

define command{
command_name notify-service-by-email
command_line /usr/bin/printf &quot;%b&quot; &quot;***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$&quot; | /usr/local/bin/sendEmail -f your_email@your_domain.com -s smtp.server.ip_or_hostname:portnumber -u &quot;** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **&quot; -t $CONTACTEMAIL$
}
</code></pre>
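<p>The left-hand side of each pipeline is just <code>printf &quot;%b&quot;</code> expanding the Nagios macros into the email body. You can preview what that body looks like by substituting sample values for the macros by hand (the values below are stand-ins, not real Nagios output):</p>

```shell
#!/bin/sh
# Sample values standing in for $NOTIFICATIONTYPE$, $HOSTNAME$, etc.
# "%b" tells printf to interpret the \n escapes in the argument.
printf "%b" "***** Nagios *****\n\nNotification Type: PROBLEM\nHost: web01\nState: DOWN\nAddress: 192.0.2.10\nInfo: CRITICAL - Host Unreachable\n\nDate/Time: Fri Jun 28 12:00:00 BST 2013\n"
```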
<p>Replace <em>your_email@your_domain.com</em> and <em>smtp.server.ip_or_hostname:portnumber</em> with your external SMTP server&apos;s sender address, hostname (or IP address), and port number (if applicable).</p>
<p>If your server requires authentication, add the following options to each command:</p>
<pre><code> -xu your_email_username -xp your_password
</code></pre>
<p>If your server utilizes TLS, add:</p>
<pre><code> -o tls=yes
</code></pre>
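<p>Putting the options together, a full invocation (outside of Nagios) would look something like the line assembled below; every address, hostname, and credential here is a placeholder. Echoing the assembled string is a cheap way to eyeball the flags before wiring them into <code>commands.cfg</code>:</p>

```shell
#!/bin/sh
# All values below are placeholders; substitute your own details.
FROM=nagios@example.com
SMTP=smtp.example.com:587
TO=admin@example.com

# -xu/-xp supply SMTP auth, -o tls=yes enables TLS, as described above.
CMD="/usr/local/bin/sendEmail -f $FROM -s $SMTP -xu $FROM -xp secret -o tls=yes -t $TO -u 'Test alert' -m 'This is a test notification.'"
echo "$CMD"
```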
<p><strong>5.</strong> Check out <a href="http://caspian.dotconf.net/menu/Software/SendEmail/" title="SendEMail" target="_blank">http://caspian.dotconf.net/menu/Software/SendEmail</a> for more command line options.</p>
<p><strong>6.</strong> Check your modified Nagios configuration is ok:</p>
<pre><code># /etc/init.d/nagios checkconfig
</code></pre>
<p><strong>7.</strong> Restart your Nagios service:</p>
<pre><code># /etc/init.d/nagios restart
</code></pre>
<p>You&apos;re done. Nagios will now send alert notifications via your external SMTP server.</p>
<p>Source: <a href="http://ricochen.wordpress.com/2012/04/18/use-external-smtp-server-to-send-nagios-alerts/" title="Nagios Alerts" target="_blank">http://ricochen.wordpress.com/2012/04/18/use-external-smtp-server-to-send-nagios-alerts</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Installing a RapidSSL certificate on Zimbra 8.0]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Since I couldn&apos;t find a straightforward tutorial for installing a RapidSSL (or any other) Commercial Certificate on Zimbra 8.0, I decided to write one for reference if nothing else.</p>
<p>RapidSSL Commercial Certificates offer a cost-effective way to add a commercial cert to your Zimbra</p>]]></description><link>https://elijahpaul.co.uk/installing-a-rapidssl-certificate-on-zimbra-8-0/</link><guid isPermaLink="false">5f68100167fa6f0001d19546</guid><category><![CDATA[Zimbra]]></category><category><![CDATA[RapidSSL]]></category><category><![CDATA[SSL Installation]]></category><category><![CDATA[SSL]]></category><dc:creator><![CDATA[Elijah Paul]]></dc:creator><pubDate>Tue, 18 Dec 2012 12:00:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Since I couldn&apos;t find a straightforward tutorial for installing a RapidSSL (or any other) Commercial Certificate on Zimbra 8.0, I decided to write one for reference if nothing else.</p>
<p>RapidSSL Commercial Certificates offer a cost-effective way to add a commercial cert to your Zimbra server.</p>
<p>The easiest method to install a RapidSSL cert is via the CLI as the root user.</p>
<p><strong>1. </strong> Start by logging into your Zimbra server&apos;s CLI via SSH.</p>
<p><strong>2. </strong> As root, begin by generating a Certificate Signing Request (CSR). In the command below, replace &apos;mail.yourdomain.com&apos; with the FQDN of your Zimbra server.</p>
<pre><code># /opt/zimbra/bin/zmcertmgr createcsr comm -new -keysize 2048 -subject &quot;/C=GB/ST=England/L=London/O=Company Name/OU=Company Branch Name/CN=mail.yourdomain.com&quot; -subjectAltNames mail.yourdomain.com
</code></pre>
<p>The above command includes the following codes:</p>
<p><strong>/C = Country: The Country is a two-letter ISO code -- for the United Kingdom, it&apos;s &apos;GB&apos;. A list of country codes is available online.</strong></p>
<p><strong>/ST = State: State is a full name, i.e. &apos;California&apos;, &apos;Scotland&apos;.</strong></p>
<p><strong>/L = Locality: Locality is a full name, i.e. &apos;London&apos;, &apos;New York&apos;.</strong></p>
<p><strong>/O = Organization: The Organization Name is your Full Legal Company or Personal Name, as legally registered in your locality.</strong></p>
<p><strong>/OU = Organizational Unit: The Organizational Unit is whichever branch of your company is ordering the certificate such as accounting, marketing, etc.</strong></p>
<p><strong>/CN = Common Name: The Common Name is the Fully Qualified Domain Name (FQDN) for which you are requesting the ssl certificate. This will be the FQDN of your Zimbra server, e.g. mail.yourdomain.com or zimbra.yourdomain.com</strong></p>
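<p>The <code>-subject</code> string is simply those fields joined with slashes. As a quick sanity check, you can assemble it from its parts before pasting it into the <code>zmcertmgr</code> command (the field values below are examples only):</p>

```shell
#!/bin/sh
# Example field values; substitute your own organization's details.
C="GB"; ST="England"; L="London"
O="Company Name"; OU="IT"; CN="mail.yourdomain.com"

# Join the fields into the subject format zmcertmgr expects.
SUBJECT="/C=$C/ST=$ST/L=$L/O=$O/OU=$OU/CN=$CN"
echo "$SUBJECT"
# -> /C=GB/ST=England/L=London/O=Company Name/OU=IT/CN=mail.yourdomain.com
```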
<p><strong>3.</strong> Now upload/send the certificate request <em>(Zimbra saves it to &apos;/opt/zimbra/ssl/zimbra/commercial/commercial.csr&apos;)</em> to your SSL provider. They will most likely provide you with your Commercial Certificate via an email in the form of text or an attached file.</p>
<p><strong>4. </strong>Save your Commercial Certificate in a temporary file. If it was provided as plain text, you can cut and paste it into a new file using <code>nano</code>.</p>
<pre><code># nano /tmp/commercial.crt
</code></pre>
<p><strong>5. </strong> Download and save the root Certificate Authority (CA) for RapidSSL certificates to a temporary file. (e.g. /tmp/ca.crt). Again you can cut and paste the CA text into a new file using nano.</p>
<pre><code># nano /tmp/ca.crt
</code></pre>
<p><strong>The root CA for RapidSSL certificates is provided by GeoTrust and can be found here</strong> - <a href="https://ssltest12.bbtest.net/" title="GeoTrust Root CA" target="_blank">https://ssltest12.bbtest.net/</a></p>
<p><strong>6. </strong> Download any intermediary CAs from your SSL provider, again to a temporary file. (e.g. /tmp/ca_intermediary.crt). RapidSSL certs usually come with a single intermediary certificate. Once again, if the intermediary certificate is provided as plain text cut and paste it using <code>nano</code></p>
<pre><code># nano /tmp/ca_intermediary.crt
</code></pre>
<p><strong>7. </strong> Combine root and intermediary CAs into a temporary file.</p>
<pre><code># cat /tmp/ca.crt /tmp/ca_intermediary.crt &gt; /tmp/ca_chain.crt
</code></pre>
<p><strong>8. </strong> Verify your commercial Certificate:</p>
<pre><code># /opt/zimbra/openssl/bin/openssl verify -CAfile /tmp/ca_chain.crt /tmp/commercial.crt
</code></pre>
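<p>If you want to see what a successful verification looks like before your real certificate arrives, you can rehearse the step with a throwaway self-signed CA. This sketch assumes a stock <code>openssl</code> binary on your PATH (rather than Zimbra&apos;s bundled copy), and all the <code>/tmp</code> filenames are placeholders:</p>

```shell
#!/bin/sh
# Create a throwaway CA (key + self-signed certificate).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Demo CA" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1

# Create a server key + CSR, then sign the CSR with the throwaway CA.
openssl req -newkey rsa:2048 -nodes -subj "/CN=mail.yourdomain.com" \
  -keyout /tmp/demo.key -out /tmp/demo.csr
openssl x509 -req -in /tmp/demo.csr -CA /tmp/demo-ca.crt \
  -CAkey /tmp/demo-ca.key -CAcreateserial -out /tmp/demo.crt -days 1

# A good chain prints "/tmp/demo.crt: OK".
openssl verify -CAfile /tmp/demo-ca.crt /tmp/demo.crt
```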
<p><strong>9. </strong> Deploy your commercial certificate.</p>
<pre><code># /opt/zimbra/bin/zmcertmgr deploycrt comm /tmp/commercial.crt /tmp/ca_chain.crt
</code></pre>
<p><strong>10. </strong> To finish, verify the certificate was deployed.</p>
<pre><code># /opt/zimbra/bin/zmcertmgr viewdeployedcrt
</code></pre>
<p><strong>11. </strong> Restarting Zimbra services will ensure the new commercial certificate takes effect.</p>
<pre><code># su - zimbra
$ zmcontrol restart
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>