Accessing IBM SVC/Storwize devices with Python and SSH

Recently I posted a simple example of how to fetch storage-related data from IBM Spectrum Control with the help of Python and its REST API.

Now let's turn the power of Python to accessing an IBM Storwize or SVC system directly over SSH and running the “lssystem” command on it. In the example below I use the very nice “paramiko” module, which handles the SSH protocol and simplifies the task.

As you can see, nothing difficult here:

#!/usr/bin/python3

# -----------------------------------------------------------------------------
# "THE BEER-WARE LICENSE" (Revision 42):
# zmey20000@yahoo.com wrote this file. As long as you retain this notice you
# can do whatever you want with this stuff. If we meet some day, and you think
# this stuff is worth it, you can buy me a beer in return Mikhail Zakharov
# -----------------------------------------------------------------------------

import paramiko


target = '192.168.1.1'
login = 'mylogin'
password = 'mypassword'
command = 'lssystem -delim ,'


def ssh_exec(command, target, user, password, port=22):
    """Run a single command on a remote host over SSH and return its output."""
    client = paramiko.SSHClient()
    # Automatically accept unknown host keys (convenient, but not secure)
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    try:
        client.connect(target, username=user, password=password, port=port)
    except (paramiko.SSHException, OSError) as err:
        print('FATAL: Unable to log in: {}'.format(err))
        exit(1)

    stdin, stdout, stderr = client.exec_command(command)

    error = stderr.read()
    if error:
        error = error.decode('US-ASCII')
        print('Error running the command: {}'.format(error))
        client.close()
        return None

    data = stdout.read()
    client.close()

    return data.decode('US-ASCII')


lssystem = ssh_exec(command, target, login, password)
print(lssystem)
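The script simply prints the raw command output. If you need structured data, the comma-delimited “attribute,value” lines that “lssystem -delim ,” returns can be turned into a dictionary with a few extra lines. A minimal sketch; the attribute names in the last line (“name”, “code_level”) are given from memory, just as an illustration:

def parse_lssystem(output):
    """Convert 'attribute,value' lines from 'lssystem -delim ,' into a dict."""
    info = {}
    for line in output.splitlines():
        if not line:
            continue
        # Split on the first comma only, in case a value itself contains commas
        key, _, value = line.partition(',')
        info[key] = value
    return info


if lssystem:
    system = parse_lssystem(lssystem)
    print(system.get('name'), system.get('code_level'))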


Getting data from IBM Spectrum Control. RESTful API usage example in Python

Searching for a handy way to fetch data from IBM Spectrum Control (earlier versions were called Tivoli Storage Productivity Center, TPC), I found an excellent IBM storage blog: https://storagemvp.wordpress.com.

Among other interesting topics it describes several methods to export SC/TPC data, so I immediately contacted its author, Dominic Pruitt, and he gave me useful advice and hints. Thank you very much, Dominic!

Because of its simplicity, the most attractive way for me is to use the RESTful API.

It is a rather new interface to Spectrum Control, which is why it is not very well documented. Nevertheless, it is quite enough to start coding a simple data exporter in Python.

Below is the example I wrote using the brilliant “requests” library, which makes HTTP as friendly as possible. It collects and lists information about the storage systems configured under Spectrum Control.

#!/usr/bin/python3

# -----------------------------------------------------------------------------
# "THE BEER-WARE LICENSE" (Revision 42):
# zmey20000@yahoo.com wrote this file. As long as you retain this notice you
# can do whatever you want with this stuff. If we meet some day, and you think
# this stuff is worth it, you can buy me a beer in return Mikhail Zakharov
# -----------------------------------------------------------------------------

import requests

username = 'admin'
password = 'password'

base_url = 'https://sc.server.local:9569/srm/'
login_form = base_url + 'j_security_check'

rest_root = base_url + 'REST/api/v1/'
rest_StorageSystems = rest_root + 'StorageSystems'

# Output table; the first element holds the column headers
StorageSystems = [
    {
        'Name': 'Name', 'ID': 'ID', 'Type': 'Type', 'Model': 'Model',
        'Firmware': 'Firmware', 'IP Address': 'IP Address',
        'Serial Number': 'Serial Number', 'Vendor': 'Vendor',
        'Pool Capacity': 'Pool Capacity', 'Used Pool Space': 'Used Pool Space',
        'Available Pool Space': 'Available Pool Space'
    }
]


def get_restful(requests_session, url):
    """GET a URL within the authenticated session and return the decoded JSON."""
    rq = requests_session.get(url)
    if rq.status_code != 200:
        print('Unable to open: {}, status code: {}'.format(url, rq.status_code))
        exit(1)

    content_type = rq.headers.get('content-type')
    if content_type != 'application/json':
        print('Unsupported Content-Type: {}. We want JSON'.format(content_type))
        exit(1)

    return rq.json()


s = requests.Session()
# Skip TLS certificate verification (requests will emit an InsecureRequestWarning)
s.verify = False

print('Logging into Spectrum Control', flush=True)
r = s.post(login_form, data={'j_username': username, 'j_password': password})
if r.status_code != 200:
    print("Can't open login form. Status code: {}".format(r.status_code))
    exit(1)

print('Checking if we can speak RESTful API', flush=True)
get_restful(s, rest_root)

print('Requesting Storage Systems information', flush=True)
tpc_storages = get_restful(s, rest_StorageSystems)

# Parse storage systems and save the essential fields
for storage in tpc_storages:
    StorageSystems.append(
        {
            'Name': storage['Name'], 'ID': storage['id'],
            'Type': storage['Type'], 'Model': storage['Model'],
            'Firmware': storage['Firmware'],
            'IP Address': storage['IP Address'],
            'Serial Number': storage['Serial Number'],
            'Vendor': storage['Vendor'],
            'Pool Capacity': storage['Pool Capacity'],
            'Used Pool Space': storage['Used Pool Space'],
            'Available Pool Space': storage['Available Pool Space']
        }
    )

print(StorageSystems)
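
The script above only prints the resulting list of dictionaries. Since the first element of StorageSystems already holds the column headers, dumping everything into a CSV file takes only a few more lines with the standard “csv” module (the output file name below is arbitrary):

import csv

# Write the collected rows to a CSV file; the header row defines the columns
with open('storage_systems.csv', 'w', newline='') as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=list(StorageSystems[0].keys()))
    writer.writeheader()
    # Skip the first element of the list: it is the header row itself
    for row in StorageSystems[1:]:
        writer.writerow(row)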

D81S – simple SAN visualisation tool

D81S is my SAN visualisation tool, which I have been developing to build a full map of a Fibre Channel Storage Area Network by tracing all paths from HBAs to storage logical devices. D81S scans SAN switches and storage systems to create a database of all volumes accessible by hosts and the corresponding LUNs provided by the storage systems.
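
To give an idea of the kind of data such a scan collects — this is not actual D81S code or its real schema, just a hypothetical illustration — every end-to-end path can be thought of as a flat record:

# Hypothetical example of a single end-to-end SAN path record.
# All field names and values are illustrative, not the real D81S schema.
path_record = {
    'host': 'host01',
    'hba_wwpn': '10:00:00:00:c9:aa:bb:01',
    'fabric': 'A',
    'switch': 'san_switch_a1',
    'switch_port': 12,
    'storage': 'storage_system_1',
    'storage_port_wwpn': '50:05:07:68:01:10:aa:01',
    'volume': 'host01_data_01',
    'lun_id': 0,
}

print(path_record['host'], '->', path_record['volume'])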

Two years ago I was working on it with passion, but later I lost my access to most parts of that storage environment. Since I'm unable to continue my work on it, I decided to share the source.

If anybody is interested in it, I can help to install it in their environment, test it, or even continue to develop it.

[Screenshot: d81s_screenshot]


Another challenge for those, who can exit vi

Using ed, write a hello_world.txt file with two lines “Hello World!” and “ed is the best editor”. Don’t forget to exit 🙂


First sketches of the new BeaST Grid family storage system

“The BeaST Grid” is the working name of a reliable storage cluster concept. It will consist of a few Controller Nodes with optional internal drives and several Drive-only Nodes. All nodes are commodity computers with internal drives.

Controller Nodes will also be able to work in a driveless, standalone mode and therefore may be used as storage virtualizers for other storage systems.

The first version of the BeaST Grid will be based on the BeaST Classic with a RAID system.


The BeaST storage system with ZFS and CTL HA, latest news

Finally, I did it! The BeaST storage system with ZFS and CTL HA works in ALUA mode with zpools balanced over controllers. The BeaST Quorum automates Failover/Failback operations.

Yes, I have something to say now:

Read the full description on the BeaST project page: The BeaST Classic – dual-controller storage system with ZFS and CTL HA

Known limitations:

  • I have to use virtual shared drives for cache (in-memory cache mirroring works badly for now, but I will win this battle someday).
  • After a controller failure/recovery occurs, a special offline procedure must be used to re-balance zpools over controllers.

As usual, the BeaST is at an early development stage, so do not run it in production!

I need testers to check if it works for someone other than me 🙂


Fighting with ZFS to make zpools failover/failback possible

After the success with single-zpool switching, I'm now trying to create a reliable and balanced configuration with two controllers and two pools.

When one of the controllers dies, its pool successfully migrates to the other controller. But when the controller recovers, the zpools need to be rebalanced.

And here arises a huge problem: “detaching” a pool from the running controller online. I tried to send commands with ctladm to stop the appropriate LUN and even to remove and re-create it, but it doesn't work properly for me, at least for now. Unbelievably, it seems the only reliable way to detach a LUN from the frontend is to stop ctld!

It turns out that the BeaST will have an offline rebalancing procedure for now. Until the offline rebalancing takes place, the BeaST will have to work with one active controller while the other only forwards data.
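
For illustration only — this is not the BeaST's actual rebalancing procedure, just a minimal sketch of the underlying zpool move, done over SSH with paramiko as in my earlier SSH post. The host names, credentials and pool name are placeholders, and the pool must not be serving any LUNs while it is moved:

#!/usr/bin/python3
# Hypothetical sketch: move a zpool from one controller to the other over SSH.
import paramiko


def run(host, user, password, command):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    stdin, stdout, stderr = client.exec_command(command)
    out, err = stdout.read().decode(), stderr.read().decode()
    client.close()
    return out, err


# Release the pool on the controller that currently owns it,
# then take it over on the recovered controller
for host, cmd in (('ctrl-a', 'zpool export tank2'),
                  ('ctrl-b', 'zpool import tank2')):
    out, err = run(host, 'root', 'secret', cmd)
    print(host, cmd, err or 'OK')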

The second issue is related to ZIL mirroring. Adding the appropriate definitions to ctl.conf doesn't work properly in a CTL HA configuration for many reasons, the main one being that HA mode requires exactly the same LUNs to be defined on both nodes, and it looks impossible to exclude the ZIL LUNs from the HA configuration. I tried to start two instances of ctld with different configuration files for back-end ZIL mirroring and front-end ports, but it doesn't help me to avoid the CTL HA issue on the back end.

Then I tried to use gmirror + GEOM gate (ggated) as the mirroring transport, but something strange happens with gmirror when the ggate device loses its connection with the dead controller. Yes, gmirror detects that the remote ggate is detached, but it doesn't want to drop it and continues to wait for something!

Finally, I replaced it with the legacy iSCSI target – istgt. It works quite stably, but sometimes drops the connection. Fortunately, gmirror detects this well and it is possible to restore the ZIL mirroring on the fly.

So, now there are two different iSCSI stacks in the BeaST! 🙂 And I'm not even halfway to making everything work, as it seems gmirror is sometimes the cause of a kernel panic.

UPD 2017.05.14:

[Screenshot: gmirror_kernelpanic]

So “gmirror + something” chains are not very stable for mirroring the ZIL. And yes, HAST doesn't look suitable for the BeaST's purposes as it is based on ggate and creates one-way replication. Also, I'd prefer to keep the same device names on both sides of the replication.

Let's see if a “shared drive” for the ZIL makes everything more stable.
