Cloud security - CSA domains



This is the second post in the blog series on cloud security. You can find the first post here.

The Cloud Security Alliance (CSA) provides actionable best practices for businesses transitioning to cloud services while mitigating the risks involved in doing so. As per the latest version of the CSA guidance, the critical areas of focus in cloud computing are divided into fourteen domains.



Cloud Security - Risk factors



Cloud security is a major consideration for enterprise-wide cloud adoption, especially of public cloud. This is part 1 of a series of blog posts where I plan to pen down the different dimensions of cloud security, starting with the risk factors of cloud adoption.

The various attributes of the security risks involved in the process can be summed up as follows:


ENISA* (the European Network and Information Security Agency) recommends that the following risk areas be taken into account while embarking on a cloud adoption journey:

OpenStack Icehouse installation error: nova-api service getting stopped



While trying to install OpenStack Icehouse, I faced an issue with the nova-api service: it was not starting. The following error was coming up in the nova-api log:

Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c
Exit code: 1
.......

 nova Stdout: ''
2014-10-17 07:21:08.058 27270 TRACE nova Stderr: 'Traceback (most recent call last):\n  File "/usr/bin/nova-rootwrap", line 6, in <module>\n    from oslo.rootwrap.cmd import main\nImportError: No module named rootwrap.cmd\n'


The problem was with the oslo.rootwrap module; it was broken.

The solution is to upgrade the module using pip:

# pip install oslo.rootwrap --upgrade
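
A quick, hedged way to confirm the fix (commands and service names assume an Icehouse-era install; adjust to your distro):

# Confirm the module now imports cleanly
python -c "from oslo.rootwrap.cmd import main; print 'ok'"
# The rootwrap call from the log above should now exit 0
sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c
# Restart the API service (service name varies: nova-api on Ubuntu,
# openstack-nova-api on RDO-based installs)
service nova-api restart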

OpenStack: Restrict instance deletion



In OpenStack, by default, users who are members of a tenant can delete any instance in that tenant, even those spun up by other users. If you want to restrict that, you need to tweak the nova policy file, i.e. /etc/nova/policy.json.


Add the following lines to the file:

    "admin_or_user":"is_admin:True or user_id:%(user_id)s",
    "compute:delete":"rule:admin_or_user",

Make the same changes in the /etc/openstack-dashboard/nova_policy.json file as well.

Now restart the openstack-nova-api service.

Now users will be able to delete only those instances spun up by themselves, while admin users will be able to delete all instances.
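
As a minimal sketch (the service name is distro-specific: openstack-nova-api on RDO-style installs, nova-api on Ubuntu):

# Restart the API so the new policy rules are picked up
service openstack-nova-api restart
# A non-admin member deleting an instance created by another user
# should now be rejected by the compute API with a 403 Forbidden:
# nova delete <instance-id-created-by-another-user>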

OpenStack: Assign floating IP using a Heat template



Creating YAML templates that assign floating IPs to the instances being spawned can be a bit tricky. Let us look at a scenario where we need to spin up a VM, assign a floating IP from a pool, and reference this floating IP in the userdata as well. We will make use of the network IDs of the internal and external networks, as well as the subnet ID of the internal network.

The logical workflow is as follows:

  • Create a port resource using the internal network and internal subnet IDs
  • Create a floating IP resource, referring to the external network ID
  • Associate the floating IP with the port
  • In the server resource being created, associate the port resource

Now we will see how this can be implemented using both the HOT and AWS template formats; a minimal HOT sketch follows.
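
Below is a minimal HOT sketch of the above workflow. It assumes Icehouse-era Heat resource types; the parameter names, image and flavor are illustrative placeholders.

heat_template_version: 2013-05-23

parameters:
  internal_net_id: {type: string}
  internal_subnet_id: {type: string}
  external_net_id: {type: string}

resources:
  server_port:
    type: OS::Neutron::Port
    properties:
      network_id: {get_param: internal_net_id}
      fixed_ips:
        - subnet_id: {get_param: internal_subnet_id}

  server_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: {get_param: external_net_id}

  floating_ip_assoc:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: {get_resource: server_floating_ip}
      port_id: {get_resource: server_port}

  my_server:
    type: OS::Nova::Server
    properties:
      image: cirros                  # illustrative image name
      flavor: m1.small               # illustrative flavor
      networks:
        - port: {get_resource: server_port}
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            echo "My floating IP is $FLOATING_IP" > /tmp/floating_ip.txt
          params:
            $FLOATING_IP: {get_attr: [server_floating_ip, floating_ip_address]}

The str_replace intrinsic is what lets the userdata refer to the floating IP address that will be associated with the port.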

OpenStack monitoring: Zabbix Ceilometer proxy installation



Recently, a Ceilometer proxy for Zabbix was released by OneSource. This proxy pulls instance information from OpenStack and populates it in Zabbix.

The source code can be downloaded from here:

https://github.com/OneSourceConsult/ZabbixCeilometer-Proxy

The basic prerequisites for the server where the proxy runs are Python and the pika library. There should also be network connectivity from the proxy machine to your OpenStack installation.
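
A hedged setup sketch for an Ubuntu host follows; the clone URL is from the repo above, but the entry-point and configuration file names are assumptions, so check the project's README:

# Install the prerequisites
apt-get install -y git python-pip
pip install pika
# Fetch the proxy code
git clone https://github.com/OneSourceConsult/ZabbixCeilometer-Proxy.git
cd ZabbixCeilometer-Proxy
# Edit the proxy configuration with your Keystone/Ceilometer endpoints and
# Zabbix server details (file name assumed), then start the proxy:
python proxy.py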


Agentless OpenStack monitoring using Zabbix



Zabbix can be a tough cookie to crack! And if you are planning to monitor OpenStack using Zabbix, there is a lot of additional work to be done, more so if you want to go the agentless way, i.e. using SNMP.

So, here we go. I am using Ubuntu 12.04, both for my Zabbix server and for the OpenStack nodes.

  • First, you need to install the following packages using apt-get on the machine being monitored, i.e. the OpenStack node (a rough sketch follows).
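
As a rough sketch of the agentless approach, assuming Ubuntu 12.04 (the exact snmpd.conf settings and community string depend on your environment):

# On the OpenStack node being monitored: install the SNMP daemon and tools
apt-get install -y snmpd snmp
# Edit /etc/snmp/snmpd.conf to listen on the management interface and to
# permit queries from the Zabbix server's IP/community, then restart:
service snmpd restart
# From the Zabbix server, verify SNMP connectivity (community string assumed)
snmpwalk -v2c -c public <openstack-node-ip> system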

Tech tip: Increase OpenStack project quota from the command line



1. List the keystone tenants and search for the required tenant:

keystone tenant-list | grep <tenantname>

Note the ID of the tenant displayed. You need to use this ID in the next command.

2. Get quota details of the tenant using the following command:

nova-manage project quota <tenantid>
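
To actually raise a quota, here is a sketch using the nova client of that era (flag names may differ slightly in your release; the values are illustrative):

# Raise the instance, vCPU and RAM quotas for the tenant
nova quota-update --instances 20 --cores 40 --ram 51200 <tenantid>
# Verify the new limits
nova quota-show --tenant <tenantid>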


Instances go to paused state in OpenStack Havana



Issue: 

All instances in OpenStack will be in a paused state. You will not be able to create new instances or switch on any of the paused instances.

Reason: 

Most often the reason will be lack of disk space on your compute node. By default, instances are created in the /var/lib/nova/instances folder of the compute node. This location is defined by the parameter "instances_path" in nova.conf of the compute node. If your "/" partition is running out of disk space, then you cannot perform any instance-related operations.

Solution: 

  • Change the "instances_path" location to a different location. Ideally, you could attach an additional disk, mount it to a directory, and update the directory path in the "instances_path" parameter.
  • A problem arises when you already have a number of instances in the previous folder; you should move them over to the new location.
  • Also, set the group and ownership of the new instances folder to the "nova" user, so that the permissions, ownership and group memberships are the same as those of the previous folder (a minimal sketch of these steps follows).
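
A minimal sketch of those steps, assuming a new disk is already mounted at /mnt/nova-instances (the mount point and service name are assumptions; the service is nova-compute on Ubuntu, openstack-nova-compute on RDO-based installs):

# Stop the compute service before moving the instance store
service nova-compute stop
# Copy the existing instances to the new location, preserving attributes
rsync -a /var/lib/nova/instances/ /mnt/nova-instances/
# Make the new folder owned by the nova user and group, like the old one
chown -R nova:nova /mnt/nova-instances
# Point nova.conf on the compute node at the new path:
#   instances_path = /mnt/nova-instances
# Then start the compute service again
service nova-compute start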


OpenStack Havana neutron agent-list alive status error



0 comments
In some scenarios, the OpenStack neutron agent status will show as "xxx" (down) even though you can see that the neutron agent services are up and running on the network and compute nodes. Also, you could see the agent status fluctuating if you run the agent-list command repeatedly. Confusing, right?

Actually, the problem is not with the actual agent status, but with two default configuration values in neutron.conf, i.e. agent_down_time and report_interval, which determine the intervals at which neutron checks the agent status. There is a bug reported against this issue:

https://bugs.launchpad.net/neutron/+bug/1293083

As per the details in the bug, "report_interval is how often an agent sends out a heartbeat to the service. The Neutron service responds to these 'report_state' RPC messages by updating the agent's heartbeat DB record. The last heartbeat is then compared to the configured agent_down_time to determine if the agent is up or down."

The neutron agent-list command uses the agent_down_time value to display the status. The default values are set very low, which is why the alive status is shown as down or fluctuating.

Solution: As suggested in the bug, update the values of agent_down_time and report_interval to 75 and 30 seconds respectively. Since this resolves the above-mentioned RPC issue with the Open vSwitch agent on the compute node, all the agents will be shown as alive.
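
As a sketch, the two values go into neutron.conf (section layout as in the Havana-era sample config; check where your distribution places them), followed by a restart of neutron-server and the agents:

# neutron.conf on the controller and on the agent nodes
[DEFAULT]
agent_down_time = 75

[agent]
report_interval = 30

# Then restart neutron-server and the L2/L3/DHCP agents so the new
# values take effect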