Context

Alien4Cloud is an open platform that eases DevOps implementation in large and heterogeneous organizations.
Whenever possible and needed, Alien4Cloud provides the model and tools to ensure application portability across different infrastructures, leveraging the OASIS TOSCA standard, whose goal is to ‘Enhance the portability and operational management of cloud applications and services across their entire lifecycle’.
But to cover all automation and DevOps scenarios, it is just as important to be able to easily onboard applications that leverage any infrastructure resource, even ones that are potentially not portable.

The objective of this post is to explain the concept of custom on-demand resources, which helps reach that goal and opens the door to a large number of possibilities, such as easily leveraging resources offered by a cloud provider (knowing that this may restrict the portability of the application outside that provider’s infrastructure).

In the end, this is an interesting possibility for some users, as it lets them benefit from Alien4Cloud’s collaborative and productivity DevOps features for the enterprise even if portability is not a prime concern at first.

Solution

We can now extend the capabilities of our orchestrator by providing custom on-demand resources. For example, this can be used to take advantage of AWS services that are not managed natively by our orchestrator (such as Elastic Load Balancer, RDS databases and so on). In combination with the recently added Ansible support as an implementation artifact, it is now easy to leverage extended IaaS services in your topologies.

Usually in Alien4Cloud, we distinguish two kinds of components:

  • on-demand resources: components that are usually IaaS resources (computes, networks, volumes), provisioned and managed by the orchestrator. Their types are provided by the orchestrator plugin. Usually, these components are declared in the topology using abstract types and matched with concrete implementations provided by the orchestrator plugin at deployment stage.
  • applicative components: what you really deploy on VMs (an Apache server, a JDK, an Nginx …). Their types are provided by users, packaged as CSARs. A CSAR is composed of a description of the component (in TOSCA DSL) and some implementation artifacts (in other words: scripts executed to install/uninstall/manage the component; see the sketch after this list). Until now, we only supported bash/bat scripts as the underlying language for implementation artifacts.
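
For instance, a minimal applicative component type with a bash implementation artifact could look like this (a sketch; the type name and script path are hypothetical):

node_types:
  samples.nodes.MySoftware:
    derived_from: tosca.nodes.SoftwareComponent
    interfaces:
      Standard:
        create:
          # bash script packaged in the CSAR next to this descriptor
          implementation: scripts/install.sh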

Two features have been recently added to Alien4Cloud:

  • Ability to manage Ansible blueprints (as a new kind of implementation artifact).
  • Ability to provide your own custom on-demand resources types.

What is a custom on-demand resource?

A custom on-demand resource is an IaaS resource for which you provide your own type, using the language of your choice (as long as it is recognized by Alien4Cloud and the underlying orchestrator).

Let’s take an example: our Cloudify 3 orchestrator plugin (the wire between A4C and Cloudify), and specifically its AWS support, can manage a limited range of AWS resources:

  • computes (EC2 instances).
  • networks.
  • block storages.

However, AWS provides a lot of services you would want to leverage in order to build your application stacks:

  • Elastic Load Balancers.
  • RDS Databases.
  • S3 Buckets.

You are now able to extend the capabilities of the A4C/Cloudify chain by providing your own types and scripts to provision IaaS resources.

Custom on-demand resource scenario walkthrough

We will illustrate the combination of these two features (Ansible support and custom on-demand resources) in a scenario where we will:

  • start AWS EC2 instances using Ansible.
  • install Apache on them using Ansible.
  • start an AWS Elastic Load Balancer in front of them using Ansible.

Ansible EC2 Instance

First of all, we will use Ansible to provision our VMs.

Here is the TOSCA description of the type:

node_types:
  alien.ansible.EC2Instance:
    derived_from: tosca.nodes.Compute
    properties:
      image_id:
        type: string
        required: true
      instance_type:
        type: string
        required: true
      security_group_ids:
        type: string
        required: true
    attributes:
      ip_address: { get_operation_output: [ SELF, Standard, start, ip_address ] }
      instance_id: { get_operation_output: [ SELF, Standard, start, instance_id ] }
    interfaces:
      Standard:
        start:
          inputs:
            PLAYBOOK_ENTRY: start.yml
            instance_type: { get_property: [ SELF, instance_type ] }
            image: { get_property: [ SELF, image_id ] }
            group: { get_property: [ SELF, security_group_ids ] }
          implementation: playbooks/EC2Instance/playbook.ansible
        stop:
          inputs:
            PLAYBOOK_ENTRY: stop.yml
            instance_id: { get_attribute: [ SELF, instance_id ] }
          implementation: playbooks/EC2Instance/playbook.ansible
        delete:
          inputs:
            PLAYBOOK_ENTRY: delete.yml
            instance_id: { get_attribute: [ SELF, instance_id ] }
          implementation: playbooks/EC2Instance/playbook.ansible

Here are the Ansible role tasks for the EC2 instance start operation:

# Provision exactly one EC2 instance and register the result
- name: Provision an EC2 instance
  ec2:
    group_id: "{{ group }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    wait: true
    exact_count: "1"
  register: ec2

# Expose the instance's private IP and id as Ansible facts;
# Alien4Cloud maps them to the TOSCA operation outputs.
- name: set ip_address
  set_fact: ip_address={{ item.private_ip }}
  with_items: "{{ ec2.instances }}"

- name: set instance_id
  set_fact: instance_id={{ item.id }}
  with_items: "{{ ec2.instances }}"

We use facts to expose ip_address and instance_id as operation outputs. ip_address is exposed so that the orchestrator can connect to the instance and execute other operations related to it; instance_id is used by the stop and delete Ansible roles.

Once packaged as a CSAR and uploaded into Alien4Cloud, you can use this type in your topology to define compute nodes.
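
For example, a compute node template based on this type could be declared as follows (a minimal sketch; property values are placeholders to adapt to your AWS account):

node_templates:
  WebVM:
    type: alien.ansible.EC2Instance
    properties:
      image_id: ami-xxxxxxxx           # hypothetical AMI id
      instance_type: t2.micro
      security_group_ids: sg-xxxxxxxx  # hypothetical security group id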

Ansible Apache playbook & TOSCA description

We provide our Apache playbook as a set of roles, one for each lifecycle operation (create, configure, start, stop, delete), and describe the component using the TOSCA DSL:

node_types:
  alien.nodes.AnsibleApache:
    derived_from: tosca.nodes.WebServer
    properties:
      port:
        type: integer
        default: 80
    interfaces:
      Standard:
        create:
          inputs:
            PLAYBOOK_ENTRY: create.yml
          implementation: playbook/apache.ansible
        configure:
          inputs:
            port: { get_property: [SELF, port] }
            PLAYBOOK_ENTRY: configure.yml
          implementation: playbook/apache.ansible
        start:
          inputs:
            PLAYBOOK_ENTRY: start.yml
          implementation: playbook/apache.ansible
        stop:
          inputs:
            PLAYBOOK_ENTRY: stop.yml
          implementation: playbook/apache.ansible
        delete:
          inputs:
            PLAYBOOK_ENTRY: delete.yml
          implementation: playbook/apache.ansible

The playbook roles are quite simple. Here are the tasks for the create role:

- name: Install Apache server
  yum: name=httpd state=present

And the tasks for the start role:

- name: Start Apache server
  service: name=httpd state=started
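
The configure and stop roles are not reproduced in this post; a minimal sketch, assuming the default httpd configuration layout, could be:

# configure.yml (sketch): apply the port input passed by the TOSCA operation
- name: Configure Apache listen port
  lineinfile: dest=/etc/httpd/conf/httpd.conf regexp='^Listen ' line='Listen {{ port }}'

# stop.yml (sketch): symmetric to the start role
- name: Stop Apache server
  service: name=httpd state=stopped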

Elastic Load Balancer playbook and TOSCA description

This is a typical custom on-demand resource: we want to come up with an Ansible playbook that provisions an Elastic Load Balancer in AWS. Our orchestrator doesn’t yet know how to perform such an operation, so we will provide a playbook, described by its TOSCA descriptor and packaged as a CSAR. We will also provide the relationship type that wires an ELB to an EC2 instance.

node_types:
  alien.ansible.EC2ELB:
    derived_from: tosca.nodes.Root
    properties:
      name:
        type: string
        required: true 
      load_balancer_port:
        type: integer
        required: true
        default: 80 
      instance_port:
        type: integer
        required: true
        default: 80 
      zone:
        type: string
        required: true 
    attributes:
      dns_name: { get_operation_output: [ SELF, Standard, start, dns_name ] }
    requirements:
      - proxy: alien.ansible.capabilities.EC2Instance
        type: alien.ansible.relationships.ELBProxyEC2Instance
        lower_bound: 0
        upper_bound: unbounded
    interfaces:
      Standard:
        start:
          inputs:
            PLAYBOOK_ENTRY: start.yml
            name: { get_property: [ SELF, name ] }
            load_balancer_port: { get_property: [ SELF, load_balancer_port ] }
            instance_port: { get_property: [ SELF, instance_port ] }
            zone: { get_property: [ SELF, zone ] }
          implementation: playbooks/EC2ELB/playbook.ansible
        delete:
          inputs:
            PLAYBOOK_ENTRY: delete.yml
            name: { get_property: [ SELF, name ] }
          implementation: playbooks/EC2ELB/playbook.ansible
 
capability_types:
  alien.ansible.capabilities.EC2Instance:
    derived_from: tosca.capabilities.Root

relationship_types:
  alien.ansible.relationships.ELBProxyEC2Instance:
    derived_from: tosca.relationships.ConnectsTo
    valid_target_types: [ alien.ansible.capabilities.EC2Instance ]
    interfaces:
      configure:
        add_target:
          inputs:
            PLAYBOOK_ENTRY: add_target.yml
            lb_name: { get_property: [ SOURCE, name ] }
            instance_id: { get_attribute: [ TARGET, instance_id ] }
          implementation: playbooks/ELBProxyEC2Instance/playbook.ansible
        remove_target:
          inputs:
            PLAYBOOK_ENTRY: remove_target.yml
            lb_name: { get_property: [ SOURCE, name ] }
            instance_id: { get_attribute: [ TARGET, instance_id ] }
          implementation: playbooks/ELBProxyEC2Instance/playbook.ansible

We have also added a capability to our EC2 instance type, which will be the target of the relationship.
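
In the EC2Instance node type, that capability declaration could look like this (a sketch; the capability name elb_target is illustrative):

# added under alien.ansible.EC2Instance:
    capabilities:
      elb_target:
        type: alien.ansible.capabilities.EC2Instance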

Here are the Ansible role tasks for the start operation of the alien.ansible.EC2ELB type:

# Create (or update) the load balancer
- local_action:
    module: ec2_elb_lb
    name: "{{ name }}"
    state: present
    zones:
      - "{{ zone }}"
    listeners:
      - protocol: http
        load_balancer_port: "{{ load_balancer_port }}"
        instance_port: "{{ instance_port }}"
        proxy_protocol: True

# Query the ELB to retrieve its generated DNS name
- action:
    module: ec2_elb_facts
    names:
      - "{{ name }}"
  register: elb_facts

# Expose the DNS name as a fact, mapped to the dns_name operation output
- name: set dns_name
  set_fact: dns_name={{ item.dns_name }}
  with_items: "{{ elb_facts.elbs }}"

We use the ec2_elb_lb Ansible module to instantiate an HTTP proxy and expose its DNS name as an attribute.

The add_target operation associates an EC2 instance with the load balancer:

- name: Instance Register
  local_action:
    module: ec2_elb
    instance_id: "{{ instance_id }}"
    ec2_elbs: "{{ lb_name }}"
    state: present
    wait: no
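
The remove_target operation (not reproduced here) would be symmetric, deregistering the instance with state: absent — a sketch:

- name: Instance Deregister
  local_action:
    module: ec2_elb
    instance_id: "{{ instance_id }}"
    ec2_elbs: "{{ lb_name }}"
    state: absent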

TOSCA topology

Once packaged as CSARs and uploaded into Alien4Cloud, we can use all these types to create a topology that fits our needs: a scalable EC2 instance, hosting an Apache server, proxied by an Elastic Load Balancer.

[screenshot]
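
In the TOSCA DSL, the resulting topology template could look like this (a sketch using the simple-profile short notation for requirements; node names and property values are illustrative):

topology_template:
  node_templates:
    EC2Instance:
      type: alien.ansible.EC2Instance
      properties:
        image_id: ami-xxxxxxxx          # hypothetical values
        instance_type: t2.micro
        security_group_ids: sg-xxxxxxxx
    Apache:
      type: alien.nodes.AnsibleApache
      requirements:
        - host: EC2Instance
    LoadBalancer:
      type: alien.ansible.EC2ELB
      properties:
        name: my-elb
        zone: eu-west-1a
      requirements:
        - proxy: EC2Instance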

At deployment stage, we won’t have anything to match since we don’t have any abstract node in our topology.

[screenshot]

After deployment we can see our instance in the AWS console:

[screenshot]

And of course our ELB instance with its single EC2 instance target:

[screenshot]

Let’s try the URL given by the dns_name attribute of the EC2ELB node (we have modified index.html so that it displays the instance’s private IP address):

[screenshot]

Let’s scale the EC2Instance node to duplicate our Apache server:

[screenshot]

After the operation completes, we can see our two EC2 instances and check that our ELB now has two targets:

[screenshot]

[screenshot]

The new one is in OutOfService status because the Apache installation has not yet finished and the health check is still failing.

After a few seconds, if we refresh the browser, we can see each instance alternately:

[screenshot]

[screenshot]

Conclusion

We have built a non-portable topology that uses AWS-specific services. This is not the main goal of Alien4Cloud / TOSCA, which aim at helping us define abstract topologies that can be deployed on any IaaS; but sometimes you want to explicitly take advantage of specific services offered by cloud providers.

In the next version of Alien4Cloud, you will be able to associate custom on-demand resource types with orchestrator locations, and admins will be able to define on-demand resource nodes (the same way they already define on-demand resources like computes, networks, volumes …). You will then be able to use abstract types in your topology and choose custom on-demand concrete implementations at the matching stage of the deployment.

Our full samples are available on GitHub.