Wednesday, January 29, 2014

Recursive and Iterative Approaches for Fibonacci in Python

#!/usr/bin/python

### Recursive approach for the Fibonacci series
def fibo_recur(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibo_recur(n - 1) + fibo_recur(n - 2)

### Iterative approach for the Fibonacci series
def fibo_iter(n):
    a = 1
    b = 1
    if n == 0:
        return 0
    while n >= 3:
        c = a + b
        a = b
        b = c
        n = n - 1
    return b

recur_result = fibo_recur(3)
print(recur_result)

iter_result = fibo_iter(3)
print(iter_result)
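The recursive version above recomputes the same subproblems over and over, so its running time grows exponentially with n. A small memoized sketch (fibo_memo is a name introduced here for illustration, not part of the original post) keeps the recursive shape while computing each value only once:

```python
# Memoized recursion: caches each Fibonacci value so it is computed only
# once, turning the exponential recursion into linear time.
# The shared default-argument dict deliberately persists across calls.
def fibo_memo(n, cache={0: 0, 1: 1}):
    if n not in cache:
        cache[n] = fibo_memo(n - 1) + fibo_memo(n - 2)
    return cache[n]

print(fibo_memo(30))  # prints 832040
```

Calling fibo_recur(30) already takes noticeably longer than fibo_memo(30), because the plain recursion evaluates over a million calls for the same answer.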

Tuesday, January 28, 2014

Jenkins - A Centralized Tool to run your Automated Tests

Jenkins is an award-winning application that monitors executions of repeated jobs, such as building a software project or jobs run by cron. Among other things, Jenkins currently focuses on the following two jobs:

    •    Building/testing software projects continuously, just like CruiseControl or DamageControl. In a nutshell, Jenkins provides an easy-to-use so-called continuous integration system, making it easier for developers to integrate changes to the project, and making it easier for users to obtain a fresh build. The automated, continuous build increases the productivity.
    •    Monitoring executions of externally-run jobs, such as cron jobs and procmail jobs, even those that are run on a remote machine. For example, with cron, all you receive is regular e-mails that capture the output, and it is up to you to look at them diligently and notice when it broke. Jenkins keeps those outputs and makes it easy for you to notice when something is wrong.


Jenkins is an open-source continuous integration software tool written in the Java programming language for testing and reporting on isolated changes in a larger code base in real time. The software enables developers to find and solve defects in a code base rapidly and to automate testing of their builds.

Continuous integration has evolved since its conception. Originally, a daily build was the standard. Now, the usual rule is for each team member to submit work on a daily (or more frequent) basis and for a build to be conducted with each significant change. When used properly, continuous integration provides various benefits, such as constant feedback on the status of the software. Because CI detects deficiencies early on in development, defects are typically smaller, less complex and easier to resolve.

Jenkins is a fork of a project called Hudson, which is trademarked by Oracle and is currently being developed in parallel with Jenkins. The development community and its governing body host open meetings about the software.

Most of the time, a test team ends up writing different test harnesses to test different components, so there should be a platform from which a user can remotely start a run of a specific automation suite on a specified environment, with given configurations, from a web page.

Features
    •    Single Sign-on for all your Automation Needs
    •    Common Console for starting any Automation Run
    •    Configure all your Automation Suites from a single place
    •    Horizontal coverage by adding all automation suites
    •    Vertical drill-down for an automation engineer to understand the root cause of test failures
    •    Email notification once a run is completed
    •    Confluence page updated with results/custom message once a run is completed
    •    Code Coverage can be published

Performance Testing vs Load Testing vs Stress Testing - Examples

Performance testing - It is performed to evaluate the performance of the components of a particular system in a specific situation. It is a very wide term that includes load testing, stress testing, capacity testing, volume testing, endurance testing, spike testing, scalability testing, reliability testing, etc. This type of testing generally does not yield a simple pass or fail. It is basically done to set the benchmark and standard of the application against concurrency/throughput, server response time, latency, render response time, etc. In other words, it is a technical and formal evaluation of the responsiveness, speed, scalability and stability characteristics of an application.

Load testing is a subset of performance testing. It is done by constantly increasing the load on the application under test until it reaches the threshold limit. The main goal of load testing is to identify the upper limit of the system in terms of database, hardware, network, etc. A common goal of load testing is to set the SLAs for the application.

Example of load testing can be:
Running multiple applications on a computer simultaneously: start with one application, then a second, then a third, and so on, and observe how your computer's performance changes.
Endurance testing is also a part of load testing; it is used to calculate metrics like Mean Time Between Failures and Mean Time To Failure.

Load Testing helps to determine:
    •    Throughput
    •    Peak Production Load
    •    Adequacy of H/W environment
    •    Load balancing requirements
    •    How many users application can handle with optimal performance results
    •    How many users hardware can handle with optimal performance results
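As a rough illustration of stepping up the number of concurrent users and watching latency, here is a minimal sketch in Python. fake_request is a stand-in for a real call to the application under test; a real load test would drive actual HTTP requests, typically with a dedicated tool such as JMeter.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a real request to the application under test.
def fake_request():
    start = time.time()
    sum(range(10000))                     # simulated server-side work
    return time.time() - start

# Step up the number of concurrent users and record the average latency
# at each level, mimicking the "start with one, then more" approach above.
def load_test(user_counts, requests_per_user=20):
    results = {}
    for users in user_counts:
        with ThreadPoolExecutor(max_workers=users) as pool:
            futures = [pool.submit(fake_request)
                       for _ in range(users * requests_per_user)]
            latencies = [f.result() for f in futures]
        results[users] = sum(latencies) / len(latencies)
    return results

for users, avg in load_test([1, 5, 10]).items():
    print("%d users: average latency %.6fs" % (users, avg))
```

Plotting average latency against the user count shows where the knee of the curve is, i.e. the upper limit the definition above talks about.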

Stress testing - It is done to evaluate the application's behaviour beyond normal or peak load conditions. It is basically testing the functionality of the application under high loads. The defects found are normally related to synchronization issues, memory leaks, race conditions, etc. Some testing experts also call it fatigue testing. Sometimes it is difficult to set up a controlled environment before running the test. An example of stress testing:

A banking application can take a maximum load of 20,000 concurrent users. Increase the load to 21,000 and perform some transactions, such as a deposit or withdrawal. As soon as a transaction is made, the banking application server's database will sync with the ATM database server. Now check whether this sync happens successfully under a load of 21,000 users. Then repeat the same test with 22,000 concurrent users, and so on.

Spike testing is also a part of stress testing; it is performed by repeatedly loading the application with heavy loads that increase beyond production levels for short durations.
Stress Testing helps to determine:
    •    Errors and slowness at peak user loads
    •    Any security loopholes under overload
    •    How the hardware reacts under overload
    •    Data corruption issues under overload

Monday, January 27, 2014

What does `if __name__ == "__main__":` do in Python?

A module's __name__
Every module has a name and statements in a module can find out the name of its module. This is especially handy in one particular situation - As mentioned previously, when a module is imported for the first time, the main block in that module is run. What if we want to run the block only if the program was used by itself and not when it was imported from another module? This can be achieved using the __name__ attribute of the module.

Using a module's __name__

Example Using a module's __name__
               
#!/usr/bin/python
# Filename: using_name.py

if __name__ == '__main__':
    print('This program is being run by itself')
else:
    print('I am being imported from another module')
                               
Output
               
$ python using_name.py
This program is being run by itself

$ python
>>> import using_name
I am being imported from another module
>>>
                            
How It Works
Every Python module has its __name__ defined, and if this is '__main__', it implies that the module is being run standalone by the user, and we can take the corresponding appropriate actions.


When the Python interpreter reads a source file, it executes all of the code found in it. Before executing the code, it defines a few special variables. For example, if the Python interpreter is running that module (the source file) as the main program, it sets the special __name__ variable to the value "__main__". If this file is being imported from another module, __name__ will be set to the module's name.

In the case of your script, let's assume that it's executing as the main program, e.g. you said something like

python threading_example.py

on the command line. After setting up the special variables, it will execute the import statement and load those modules. It will then evaluate the def block, creating a function object and creating a variable called myfunction that points to the function object. It will then read the if statement and see that __name__ does equal "__main__", so it will execute the block shown there.

One of the reasons for doing this is that sometimes you write a module (a .py file) where it can be executed directly. Alternatively, it can also be imported and used in another module. By doing the main check, you can have that code only execute when you want to run the module as a program and not have it execute when someone just wants to import your module and call your functions themselves.

When your script is run by passing it as a command to the Python interpreter,
python myscript.py

all of the code that is at indentation level 0 gets executed. Functions and classes that are defined are, well, defined, but none of their code gets run. Unlike other languages, there's no main() function that gets run automatically; the main() function is implicitly all the code at the top level.

In this case, the top-level code is an if block. __name__ is a built-in variable which evaluates to the name of the current module. However, if a module is being run directly (as in myscript.py above), then __name__ is instead set to the string "__main__". Thus, you can test whether your script is being run directly or being imported by something else by testing

if __name__ == "__main__":
    ...
If that code is being imported into another module, the various function and class definitions will be imported, but the main() code won't get run. As a basic example, consider the following two scripts:

# file one.py
def func():
    print("func() in one.py")

print("top-level in one.py")

if __name__ == "__main__":
    print("one.py is being run directly")
else:
    print("one.py is being imported into another module")

# file two.py
import one

print("top-level in two.py")
one.func()

if __name__ == "__main__":
    print("two.py is being run directly")
else:
    print("two.py is being imported into another module")

Now, if you invoke the interpreter as
python one.py

The output will be

top-level in one.py
one.py is being run directly

If you run two.py instead:
python two.py

You get

top-level in one.py
one.py is being imported into another module
top-level in two.py
func() in one.py
two.py is being run directly

Thus, when module one gets loaded, its __name__ equals "one" instead of "__main__".
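A common convention that builds on this check is to keep all top-level logic inside a main() function, so that importing the module has no side effects (greet and main here are illustrative names, not from the scripts above):

```python
def greet(name):
    # Pure helper: safe to import and reuse from other modules.
    return "Hello, %s!" % name

def main():
    # Side effects (printing, argument parsing, I/O) live here,
    # so `import` never triggers them.
    print(greet("world"))

if __name__ == "__main__":
    main()          # runs only when the file is executed directly
```

With this layout, `python thefile.py` prints the greeting, while `import thefile` merely defines greet() and main() for the importer to call.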





Friday, January 24, 2014

Difference between Depth First Search and Pre-Order Traversal

DFS is an algorithm to search a hierarchical structure, but pre-order traversal seems to do something similar. So what is the difference between the two?



DFS says:
    •    If element found at root, return success.
    •    If root has no descendants, return failure
    •    Recursive DFS on left subtree: success if element found
    •    If not, Recursive DFS on right subtree: success if element found

Pre-order Traversal
    •    Visit the root
    •    Recursive pre-order on left subtree
    •    Recursive pre-order on right subtree

What are the differences then?
    •    DFS is a search, i.e. it stops when it finds its target element. Pre-order traversal is a traversal, not a search, i.e. it visits all the elements in the tree.
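The difference is easy to see in code. Below is a sketch on a small binary tree (the Node class and the tree shape are invented for illustration): the search stops as soon as it finds the target, while the traversal always visits every node.

```python
# Minimal binary-tree node, just for this illustration.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def dfs_search(node, target, visited):
    """Pre-order DFS that STOPS as soon as the target is found."""
    if node is None:
        return False
    visited.append(node.value)
    if node.value == target:
        return True
    return (dfs_search(node.left, target, visited)
            or dfs_search(node.right, target, visited))

def preorder(node, visited):
    """Traversal: always visits every node, target or not."""
    if node is None:
        return
    visited.append(node.value)
    preorder(node.left, visited)
    preorder(node.right, visited)

tree = Node(1, Node(2, Node(4), Node(5)), Node(3))
search_path, full_path = [], []
dfs_search(tree, 2, search_path)
preorder(tree, full_path)
print(search_path)   # [1, 2] - stopped as soon as 2 was found
print(full_path)     # [1, 2, 4, 5, 3] - visited everything
```

Both functions visit nodes in the same order; the only difference is the early return in the search.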

11 Common Web Use Cases Solved In Redis

Common Redis primitives like LPUSH, LTRIM, and LREM are used to accomplish tasks programmers need to get done, but that can be hard or slow in more traditional stores. A very useful and practical article. How would you accomplish these tasks in your framework?

    1.    Show the latest item listings on your home page. This is a live in-memory cache and is very fast. LPUSH is used to insert a content ID at the head of the list stored at a key. LTRIM is used to limit the number of items in the list to 5000. Only if the user needs to page beyond this cache are they sent to the database.

    2.    Deletion and filtering. If a cached article is deleted it can be removed from the cache using LREM.

    3.    Leaderboards and related problems. A leaderboard is a set sorted by score. The ZADD command implements this directly, the ZREVRANGE command can be used to get the top 100 users by score, and ZRANK can be used to get a user's rank. Very direct and easy.

    4.    Order by user votes and time. This is a leaderboard like Reddit's, where the score is a formula that changes over time. LPUSH + LTRIM are used to add an article to a list. A background task polls the list and recomputes the order, and ZADD is used to populate the list in the new order. This list can be retrieved very quickly even by a heavily loaded site. (This should be easier; the need for the polling code isn't elegant.)

    5.    Implement expires on items. To keep a list sorted by time, use Unix time as the score. The difficult task of expiring items is implemented by indexing on current_time + time_to_live. Another background worker makes queries using ZRANGE ... WITHSCORES and deletes timed-out entries.

    6.    Counting stuff. Keeping stats of all kinds is common; say you want to know when to block an IP address. The INCRBY command makes it easy to keep counters atomically, GETSET can atomically clear a counter, and the EXPIRE command can be used to tell Redis when a key should be deleted.

    7.    Unique N items in a given amount of time. This is the unique visitors problem and can be solved using SADD for each pageview. SADD won't add a member to a set if it already exists.

    8.    Real time analysis of what is happening, for stats, anti spam, or whatever. Using Redis primitives it's much simpler to implement a spam filtering system or other real-time tracking system.

    9.    Pub/Sub. Keeping a map of who is interested in updates to what data is a common task in systems. Redis has a pub/sub feature to make this easy, using commands like SUBSCRIBE, UNSUBSCRIBE, and PUBLISH.

    10.    Queues. Queues are everywhere in programming. In addition to the push and pop type commands, Redis has blocking queue commands, so a program can wait on work being added to the queue by another program. You can also do interesting things like implement a rotating queue of RSS feeds to update.

    11.    Caching. Redis can be used in the same manner as memcache.

The take-home is not to endlessly engage in model wars, but to see what can be accomplished by composing powerful, simple primitives. Certainly you can write specialized code to do all these operations, but Redis makes them much easier to implement and reason about.
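To illustrate the leaderboard pattern from item 3 without a running Redis server, here is a pure-Python stand-in that mimics the semantics of ZADD, ZREVRANGE, and ZREVRANK. With redis-py, the equivalent calls against a real server would be r.zadd, r.zrevrange, and r.zrevrank.

```python
# In-memory stand-in for a Redis sorted set, mirroring the command
# semantics only; a real leaderboard would issue these against Redis.
class SortedSet:
    def __init__(self):
        self.scores = {}

    def zadd(self, member, score):
        self.scores[member] = score          # upsert, like ZADD

    def zrevrange(self, start, stop):
        # Members ordered by descending score, like ZREVRANGE start stop.
        ordered = sorted(self.scores, key=self.scores.get, reverse=True)
        return ordered[start:stop + 1]

    def zrevrank(self, member):
        # 0-based rank from the top, like ZREVRANK.
        return self.zrevrange(0, len(self.scores) - 1).index(member)

board = SortedSet()
board.zadd("alice", 300)
board.zadd("bob", 150)
board.zadd("carol", 225)
print(board.zrevrange(0, 1))    # ['alice', 'carol'] - top 2 by score
print(board.zrevrank("bob"))    # 2
```

The point of the real commands is that Redis keeps this ordering incrementally on every ZADD, so reads like "top 100" are O(log N + 100) rather than a full sort.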

In-Memory Caching

The in-memory caching system is designed to increase application performance by holding frequently-requested data in memory, reducing the need for database queries to get that data.

The caching system is optimized for use in a clustered installation, where you set up and configure a separate external cache server. In a single-machine installation, the application will use a local cache in the application server's process, rather than a cache server.

Parts of the In-Memory Caching System

In a clustered installation, caching system components interoperate with the clustering system to provide fast response to client requests while also ensuring that cached data is available to all nodes in the cluster.

Application server. The application manages the relationship between user requests, the near cache, the cache server, and the database.

Near cache. Each application server has its own near cache for the data most recently requested from that cluster node. The near cache is the first place the application looks, followed by the cache server, then the database.

Cache server. The cache server is installed on a machine separate from application server nodes in the cluster. It's available to all nodes in the cluster (in fact, you can't create a cluster without declaring the address of a cache server).

Local cache. The local cache exists mainly for single-machine installations, where a cache server might not be present. Like the near cache, it lives with the application server. The local cache should only be used for single-machine installations or for data that should not be available to other nodes in a cluster. An application server's local cache does not participate in synchronization across the cluster.

Clustering system. The clustering system reports near cache changes across the application server nodes. As a result, although data is not fully replicated across nodes, all nodes are aware when the content of their near caches must be updated from the cache server or the database.

How In-Memory Caching Works
For typical content retrievals, data is returned from the near cache (if the data has been requested recently from the current application server node), from the cache server (if the data has been recently requested from another node in the cluster), or from the database (if the data is not in a cache).

Data retrieved from the database is placed into a cache so that subsequent retrievals will be faster.

Here's an example of how changes are handled:

    1.    Client makes a change, such as an update to a user profile. Their change is made through node A of the cluster, probably via a load balancer.
    2.    The node A application server writes the change to the application database.
    3.    The node A app server puts the newly changed data into its near cache for fast retrieval later.
    4.    The node A app server puts the newly changed data to the cache server, where it will be found by other nodes in the cluster.
    5.    Node A tells the clustering system that the contents of its near cache have changed, passing along a list of the changed cache items. The clustering system collects change reports and regularly sends them in a batch to other nodes in the cluster. Near caches on the other nodes drop any entries corresponding to those in the change list.
    6.    When the node B app server receives a request for the data that was changed, and which it has removed from its near cache, it looks to the cache server.
    7.    Node B caches the fresh data in its own near cache.
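The retrieval order described above (near cache, then cache server, then database) can be sketched as a tiered lookup. The dictionaries below are stand-ins for the real cache tiers, and the function name is invented for illustration:

```python
# Each tier is modeled as a dict; in the real system these are the near
# cache, the remote cache server, and the database, checked in that order.
def get(key, near_cache, cache_server, database):
    if key in near_cache:
        return near_cache[key], "near cache"
    if key in cache_server:
        near_cache[key] = cache_server[key]     # warm the near cache
        return near_cache[key], "cache server"
    value = database[key]                       # last resort: the database
    cache_server[key] = value                   # populate both cache tiers
    near_cache[key] = value
    return value, "database"

near, remote, db = {}, {}, {"user:42": "profile data"}
print(get("user:42", near, remote, db))   # ('profile data', 'database')
print(get("user:42", near, remote, db))   # ('profile data', 'near cache')
```

Invalidation is the part this sketch omits: as described in step 5, the clustering system is what tells each node to drop stale near-cache entries.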

Cache Server Deployment Design
In a clustered configuration, the cache server should be installed on a machine separate from the clustered application server nodes. That way, the application server process is not contending for CPU cycles with the cache server process. It is possible to have the application server run with less memory than in a single-machine deployment design. Also note that it is best if the cache servers and the application servers are located on the same network switch. This will help reduce latency between the application servers and the cache servers.

Choosing the Number of Cache Server Machines
A single dedicated cache server with four cores can easily handle the cache requests from up to six application server nodes running under full load. All cache server processes are monitored by a daemon process which will automatically restart the cache server if the JVM fails completely. Currently, multiple cache servers are not supported for a single installation.
In a cluster, the application will continue to run even if all cache servers fail. However, performance will degrade significantly because requests previously handled via the cache will be transferred to the database, increasing its load significantly.

Adjusting Cache-Related Memory

Adjusting Near Cache Memory
The near cache, which runs on each application server node, starts evicting cached items to free up memory once the heap reaches 75 percent of the maximum allowed size. When you factor in application overhead and free space requirements to allow for efficient garbage collection, a 2GB heap means that the typical amount of memory used for caching will be no greater than about 1GB.
For increased performance (since items cached in the near cache are significantly faster to retrieve than items stored remotely on the cache server), larger sites should increase the amount of memory allocated to the application server process. To see whether this is needed, you can watch the GC logs (or use a tool such as JConsole or VisualVM after enabling JMX), noting whether the amount of memory in use never drops below about 70 percent, even after garbage collection occurs.

Adjusting Cache Server Memory
The cache server process acts similarly to the near cache. However, it starts eviction once the heap reaches 80 percent of the maximum amount. On installations with large amounts of content, the default 1GB allocated to the cache server process may not be enough and should be increased.
To adjust the amount of memory the cache server process will use, edit the /etc/jive/conf/cache.conf file and uncomment the following two lines and set them to new values:
#JVM_HEAP_MIN='1024'
#JVM_HEAP_MAX='1024'
Make sure to set the min and the max to the same value -- otherwise, evictions may occur prematurely. If you need additional cache server memory, recommended values are 2048 (2GB) or 4096 (4GB). You'll need to restart the cache server for this change to take effect.

Why you should almost always choose Redis as your database

Whenever the topic of databases/persistence arises, I almost always recommend using Redis instead of MySQL or even any other NoSQL solution.

There are two reasons for choosing Redis almost always:
1) Redis data structures are a far more intuitive and versatile means of storing data than relational databases.
To me, relational databases are a very limiting and unnatural way of structuring data. I’ve always felt that mapping the concepts of your program (whatever your style of programming, but especially if it is object-oriented or functional) to relational databases is both painful and frustrating. This is for two reasons:

- Relational databases have no concept of hierarchy – that is, no nesting. What you have is a set of arrays, instead of having a tree. There’s nothing bigger than a table, and nothing smaller than a field.

- The links between nodes are of a weak and limited type: foreign keys. So you have to bend over backwards to implement some sort of network model for your data.

(BTW, this is why ORM is a rabbit hole of the kind that nothing of real beauty can come from. No matter how good the solution is, it’s always a variant of fitting a square peg in a round hole).

Redis, although it isn’t a true tree or graph, is far closer to either of them, because it has a rich set of data structures very similar to those of today’s high-level programming languages. From what I’ve seen, no other NoSQL tool offers a comparable set of data structures.

This means you’ll do far less violence to the concepts of your program when you persist its data with Redis. This makes for faster development and will considerably improve the quality of your code. More importantly, your code will be more beautiful.

2) Redis runs in RAM
Although it persists to disk, Redis data is read from and written to RAM. Since RAM is about an order of magnitude faster than a disk, this translates to queries and write operations that are roughly an order of magnitude faster. Sure, many caveats and exceptions apply, but that’s the essence of Redis’ blazing performance.

So, to sum up:
Redis will make your application 1) easier and more enjoyable to program, because it maps better to the concepts of your program; and 2) faster.
Yet…

You should not use Redis if your dataset is large (more than 2 GB).
This is because it is non-trivial (though possible) to create a cluster of Redis instances, each of them holding up to 2 GB (or 4, or 8). Also, if your application stores large volumes of data, then Redis will probably never be an option for reasons of economics (can you afford terabytes of RAM?). In that case, you should give a deep, meaningful look to Amazon S3.

(Did you notice I’m implying you should never use MySQL?)

These counterarguments to using Redis are invalid:
- MySQL is the default and Redis is not production-ready: there’s much to argue against using the default technological choice for anything, and unless your clients insist on vanilla-grade software, you should seek something better than the median tool. And Redis is very, very production-ready. Just look around and see who’s using it.

- Redis is not truly persistent because it runs in RAM: both of Redis’ persistence mechanisms (journaling and snapshotting) are good enough. For me, the ideal would be to have a reverse journal, where you store the negative changes (what you should apply to go back, instead of starting from 0 and going forwards); if you combine this with snapshotting, you’d have something that’s virtually lossless and fast. But going back to the main point, Redis persistence to disk is secure and reasonably fast.

Thursday, January 23, 2014

Why Redis is a Great Tool for New Applications and Startups

While it has been proven time and time again that open source databases and technologies are ideal for startups and application developers, due in large part to the potentially unlimited contributors who help perfect the code, when it comes time to choose a database among that open source software, what makes one stand out over another?

Open source Redis is one of the top three databases used by new applications today. According to a survey of database users by 451 Research, Redis adoption is projected to increase from 11.3 percent to 15.9 percent in 2015. It is clear that Redis is taking off as a leading in-memory database solution, but what is it, exactly, that makes Redis so attractive to startups and application developers alike?

Redis’ popularity is due, largely, to its combination of high-performance, attractive data types, and commands that simplify application development. As new companies and applications emerge, they demand scalable high-performance databases to keep up with the exponential growth of their data.

Redis’ unique characteristics have resulted in tremendous adoption rates, making it a database of choice for many leading companies. For example, Pinterest uses Redis for the “follower graph” (a breakdown of who is following whom), and Twitter uses Redis for its home timeline. Redis is especially well suited to new companies and applications for several key reasons.

Top performance
Redis is entirely served from RAM, which makes it faster than any other datastore (most of which are served from disk) by an order of magnitude. Furthermore, it has a simple, single-process, event-driven design, meaning it does not have to deal with lock mechanisms like other databases do, which hinder performance for many applications. The diagram below presents benchmark tests carried out for several leading databases.

Simplified application development
Developing new applications with Redis is way simpler, more intuitive and faster than with other databases, including MySQL. Redis has a rich set of data structures, very similar to those of today’s high-level programming languages that are increasingly used by application developers. Redis data structures like sets, lists, sorted sets, etc. allow users to perform really complex tasks very easily. It also offers transactions (MULTI/EXEC) that allow users to queue multiple commands and execute them atomically.

Conclusion
With Redis, developers do far less damage to the concepts of their programs, resulting in faster development, improved code quality and more beautiful code. Combined with its top performance, it’s no wonder Redis’ popularity is soaring.

REST vs. SOAP – The Right WebService

Although we have seen the growth of a large number of Web services in the last few years, the hype surrounding SOAP has barely subsided. Internet architects have come up with a surprisingly good argument for pushing SOAP aside: there’s a better method for building Web services in the form of Representational State Transfer (REST).

REST is more of an old philosophy than a new technology, though the realization came late. Whereas SOAP looks to jump-start the next phase of Internet development with a host of new specifications, the REST philosophy holds that the existing principles and protocols of the Web are enough to create robust Web services. This means that developers who understand HTTP and XML can start building Web services right away, without needing any toolkits beyond what they normally use for Internet application development.

In a RESTful architecture, the key resources are identified; these can be entities, collections, or anything else the designer deems worthy of its own URI. The standard methods, in this case the HTTP verbs, are mapped to resource-specific semantics, and all resources implement the same uniform interface. Content types add a further dimension, allowing different representations of resources (e.g. XML, HTML, and plain text), and resource representations can contain links to other resources. For example, a GET on /customer/4711 could return a document that contains a link to a specific /order/xyz.
A lot of new web services these days are implemented using a REST-style architecture rather than a SOAP one. Let’s step back a second and shed some light on what REST is.

What is a REST Web Service
Representational State Transfer, or REST, basically means that each unique URL is a representation of some object. You can get the contents of that object using an HTTP GET, and use POST, PUT, or DELETE to modify or delete the object (in practice, most services use a POST for this).

How Popular is REST?
All of the major web services on the Internet now use REST: Twitter and Yahoo’s web services use REST, and others include Flickr, del.icio.us, pubsub, bloglines, and technorati. Both eBay and Amazon offer web services for both REST and SOAP.

And SOAP?
SOAP is mostly used for enterprise applications, to integrate a wide range of application types and to integrate with legacy systems. On the Internet side of things, Google has been consistent in implementing its web services using SOAP, with the exception of Blogger, which uses XML-RPC.

REST vs SOAP
The companies that use REST APIs haven’t been around for very long, and their APIs mostly came out this year or last year. So REST is definitely in vogue for creating a web service. (But let’s face it: you use SOAP to wash, and you REST when you’re tired.) The main advantages of REST web services are:
        Lightweight – not a lot of extra XML markup
        Human-readable results
        Easy to build – no toolkits required

SOAP also has some advantages:
        Easy to consume – sometimes
        Rigid – type checking, adheres to a contract
        Development tools

Is the Simple Object Access Protocol really that simple? A misnomer, I guess!

Let’s discuss all the points of comparison:

API Flexibility & Simplicity

The key to the REST methodology is to write Web services using an interface that is already well known and widely used: the URI. For example, exposing a currency converter service, in which a user submits a source currency, an amount and a target currency and receives a real-time converted price, could be as simple as making a script accessible on a Web server via the following URI: http://www.ExampleCurrencyBrokerage.com/convert?from=us-dollar&value=100&target=pound
Any client or server application with HTTP support could easily call that service with an HTTP GET command. Depending on how the service provider wrote the script, the resulting HTTP response might be as simple as some standard headers and a text string containing the converted value, or it might be an XML document.
This interface method has significant benefits over SOAP-based services. Any developer can figure out how to create and modify a URI to access different Web resources. SOAP, on the other hand, requires specific knowledge of a new XML specification, and most developers will need a SOAP toolkit to form requests and parse the results.
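Calling such a service requires nothing beyond the standard library. A minimal sketch (the host and parameter names are the hypothetical ones from the example above, not a real service):

```python
# Build the hypothetical currency-converter URI from the example above.
# The host and parameter names are illustrative, not a real endpoint.
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2

params = [('from', 'us-dollar'), ('value', 100), ('target', 'pound')]
url = 'http://www.ExampleCurrencyBrokerage.com/convert?' + urlencode(params)
print(url)
# A client would then issue a plain HTTP GET against this URL
# (e.g. urllib.request.urlopen(url)) and read the text or XML body.
```

Because the whole request is just a URL, the same call can be made from a browser, a shell script, or any language with an HTTP client.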

Bandwidth Usage – REST is Lighter
Another benefit of the RESTful interface is that requests and responses can be short. SOAP requires an XML wrapper around every request and response. Once namespaces and typing are declared, a four- or five-digit stock quote in a SOAP response could require more than 10 times as many bytes as would the same response in REST.
SOAP proponents argue that strong typing is a necessary feature for distributed applications. In practice, though, both the requesting application and the service know the data types ahead of time; thus, transferring that information in the requests and responses is gratuitous.
How does one know the data types—and their locations in the response—ahead of time? Like SOAP, REST still needs a corresponding document that outlines input parameters and output data. The good part is that REST is flexible enough that developers could write WSDL files for their services if such a formal declaration were necessary. Otherwise, the declaration could be as simple as a human-readable Web page that says, “Give this service an input of some stock ticker symbol, in the format q=symbol, and it will return the current price of one share of stock as a text string.”
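The size gap is easy to demonstrate with two made-up payloads carrying the same quote; neither envelope is taken from a real service:

```python
# Illustrative payloads carrying the same quote: a SOAP-style envelope
# versus the bare string a REST service might return. Both are made up
# purely to compare sizes.
soap_response = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" '
    'xmlns:xsd="http://www.w3.org/2001/XMLSchema">'
    '<soap:Body><GetQuoteResponse>'
    '<Price xsd:type="xsd:decimal">45.25</Price>'
    '</GetQuoteResponse></soap:Body></soap:Envelope>'
)
rest_response = '45.25'

print(len(soap_response), len(rest_response))  # the envelope dwarfs the data
```

The actual payload is five bytes; everything else in the SOAP version is wrapper.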

Security

Probably the most interesting aspect of the REST vs. SOAP debate is the security perspective. The SOAP camp insists that sending remote procedure calls (RPC) through standard HTTP ports is a good way to ensure Web services support across organizational boundaries. REST followers, however, argue that the practice is a major design flaw that compromises network safety. REST calls also go over HTTP or HTTPS, but with REST the administrator (or firewall) can discern the intent of each message by analyzing the HTTP command used in the request. For example, a GET request can always be considered safe because, by definition, it can't modify any data; it can only query data.
A typical SOAP request, on the other hand, will use POST to communicate with a given service. And without looking into the SOAP envelope—a task that is both resource-consuming and not built into most firewalls—there’s no way to know whether that request simply wants to query data or delete entire tables from the database.
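The verb-inspection rule a firewall could apply is only a few lines; the function below is an illustrative sketch based on HTTP's standard safe methods (GET, HEAD, OPTIONS):

```python
# HTTP's "safe" methods may read but never modify data, so a firewall or
# proxy can allow them without inspecting the request body at all.
SAFE_METHODS = {'GET', 'HEAD', 'OPTIONS'}

def is_safe_request(method):
    """Return True if the HTTP verb alone guarantees a read-only request."""
    return method.upper() in SAFE_METHODS

print(is_safe_request('GET'))    # a REST query: allow without inspection
print(is_safe_request('POST'))   # a typical SOAP call: intent unknown
```

A POST carrying a SOAP envelope fails this cheap test, which is exactly why the envelope would have to be opened to know its intent.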

As for authentication and authorization, SOAP places the burden in the hands of the application developer. The REST methodology instead takes into account the fact that Web servers already have support for these tasks. Through the use of industry-standard certificates and a common identity management system, such as an LDAP server, developers can make the network layer do all the heavy lifting.
This is not only helpful to developers, but it eases the burden on administrators, who can use something as simple as ACL files to manage their Web services the same way they would any other URI.

REST ain’t Perfect

To be fair, REST isn't perfect. It isn't the best solution for every Web service. Data that needs to be secure should never be sent as parameters in URIs. And large amounts of data, like that in detailed purchase orders (POs), can quickly become cumbersome or even exceed the practical limits of a URI.
And when it comes to attachments, SOAP is the clear winner: it can transport all your text and binaries without a glitch. In such cases, SOAP is indeed a solid solution. But it's important to try REST first and resort to SOAP only when necessary. This helps keep application development simple and accessible.
Fortunately, the REST philosophy is catching on with developers of Web services. The latest version of the SOAP specification now allows certain types of services to be exposed through URIs (although the response is still a SOAP message). Similarly, users of the Microsoft .NET platform can publish services so that they use GET requests. All this signifies a shift in thinking about how best to interface Web services.
Developers need to understand that sending and receiving a SOAP message isn’t always the best way for applications to communicate. Sometimes a simple REST interface and a plain text response does the trick—and saves time and resources in the process.

HTTP vs REST vs SOAP

I have been an active proponent of SOAP since its inception. SOAP revolutionized RPC and enabled loose coupling to a great extent. Of late, however, I have been giving APIs and interfaces considerable thought, and I am leaning much more towards simple HTTP-based APIs with an XML or JSON response format, as opposed to SOAP. In this post I pen down some thoughts on the merits and demerits of each.

Introduction

Let me first clarify the terminology -
  •         SOAP refers to the Simple Object Access Protocol
  •         HTTP-based APIs refer to APIs that are exposed as one or more HTTP URIs, with typical responses in XML / JSON. Response schemas are custom per object
  •         REST, on the other hand, adds an element of standardized URIs, and also gives importance to the HTTP verb used (i.e., GET / POST / PUT, etc.)

Typing
SOAP provides relatively stronger typing since it has a fixed set of supported data types. It therefore guarantees that a return value will be available directly in the corresponding native type on a particular platform. In the case of HTTP-based APIs, the return value needs to be de-serialized from XML and then type-cast. This may not represent much effort, especially for dynamic languages. In fact, even in the case of complex objects, traversing an object is very similar to traversing an XML tree, so there is no definitive advantage in terms of ease of client-side coding.
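De-serializing and type-casting in a dynamic language really is little effort. A sketch with a hypothetical JSON response body (the field names are made up for illustration):

```python
import json

# Hypothetical JSON body an HTTP API might return for a stock-quote request.
response_body = '{"symbol": "ABC", "price": "45.25", "volume": "120"}'

quote = json.loads(response_body)   # de-serialize the response
price = float(quote['price'])       # type-cast to the native numeric type
volume = int(quote['volume'])
print(price, volume)
```

Two lines of casting replace what SOAP's type machinery would have declared in the envelope.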

Client-side effort

Making calls to an HTTP API is significantly easier than making calls to a SOAP API. The latter requires a client library, a stub and a learning curve. The former is native to all programming languages and simply involves constructing an HTTP request with appropriate parameters appended to it. Even psychologically the former seems like much less effort.

Testing and Troubleshooting

It is also easy to test and troubleshoot an HTTP API, since one can construct a call with nothing more than a browser and check the response inside the browser window itself. No troubleshooting tools are required to generate a request / response cycle. In this lies the primary power of HTTP-based APIs.

Server-side effort

Most programming languages make it extremely easy to expose a method using SOAP; the serialization and deserialization is handled by the SOAP server library. Exposing an object's methods as an HTTP API can be relatively more challenging, since it may require serialization of output to XML. Making the API REST-y involves additional work to map URI paths to specific handlers and to incorporate the meaning of the HTTP verb into the scheme of things. Of course, many frameworks exist to make this task easier. Nevertheless, as of today, it is still easier to expose a set of methods using SOAP than it is to expose them over regular HTTP.
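The URI-path-to-handler mapping mentioned above can be sketched as a plain dictionary; the routes and handler names here are made up for illustration, and real frameworks automate exactly this step:

```python
# A minimal sketch of routing (method, path) pairs to handler functions.
def get_user(user_id):
    return 'user %s' % user_id

def delete_user(user_id):
    return 'deleted %s' % user_id

ROUTES = {
    ('GET', '/users'): get_user,
    ('DELETE', '/users'): delete_user,
}

def dispatch(method, path, arg):
    """Look up the handler for this verb+path; 404 if none is registered."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return '404 Not Found'
    return handler(arg)

print(dispatch('GET', '/users', '42'))
```

Note how the HTTP verb participates in the lookup, which is precisely what makes the API REST-y rather than just HTTP-based.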

Caching

Since HTTP-based / RESTful APIs can be consumed using simple GET requests, intermediate proxy servers / reverse proxies can cache their responses very easily. SOAP requests, on the other hand, use POST and require a complex XML request body, which makes response caching difficult.
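What a caching proxy does for GET responses can be mimicked with a tiny in-process cache keyed on the URL; the fetch function below is a made-up stand-in for the real network call:

```python
# A toy response cache keyed by URL, mimicking what a proxy does for GET.
cache = {}

def cached_get(url, fetch_fn):
    if url not in cache:
        cache[url] = fetch_fn(url)   # miss: perform the real request once
    return cache[url]                # hit: reuse the stored response

calls = []
def fake_fetch(url):
    """Stand-in for the network round trip; records how often it runs."""
    calls.append(url)
    return 'response for ' + url

print(cached_get('/quote?q=ABC', fake_fetch))
print(cached_get('/quote?q=ABC', fake_fetch))  # served from the cache
print(len(calls))                              # backend was hit only once
```

A POSTed SOAP envelope offers no such stable cache key, which is why proxies pass it through every time.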

Conclusions

In the end, I believe SOAP requires greater implementation effort and understanding on the client side, while HTTP-based or REST-based APIs require greater implementation effort on the server side. API adoption can increase considerably if an HTTP-based interface is provided. In fact, an HTTP-based API with XML/JSON responses represents the best of both breeds: it is easy to implement on the server as well as easy to consume from a client.

Sunday, January 19, 2014

Sorting Methods Implementation in Python

INSERTION SORT
###################  Start of the Program

#!/usr/bin/python

# Insertion sort: keep a growing sorted prefix and insert each new element into it

def insertionSort(alist):
   for index in range(1,len(alist)):

     currentvalue = alist[index]
     position = index

     while position>0 and alist[position-1]>currentvalue:
         alist[position]=alist[position-1]
         position = position-1

     alist[position]=currentvalue

alist = [54,26,93,17,77,31,44,55,20]
insertionSort(alist)
print(alist)

####################  End of the Program

SELECTION SORT
#####################  Start of the Program

#!/usr/bin/python

# Selection sort: on each pass, move the largest remaining element into its final slot

def selectionSort(alist):
   for fillslot in range(len(alist)-1,0,-1):
       positionOfMax=0
       for location in range(1,fillslot+1):
           if alist[location]>alist[positionOfMax]:
               positionOfMax = location

       temp = alist[fillslot]
       alist[fillslot] = alist[positionOfMax]
       alist[positionOfMax] = temp

alist = [54,26,93,17,77,31,44,55,20]
selectionSort(alist)
print(alist)

#####################  End of the Program

Saturday, January 18, 2014

API to fetch user details from Twitter

          ******************* (First Program with name oauth1.py) *****************

#######################  Start of the Program

#!/usr/bin/python

import time
import random
import urllib

# crypto imports
import base64
import hmac
from hashlib import sha1

# this style is from python urllib implementation
nonce_alphabet = ('ABCDEFGHIJKLMNOPQRSTUVWXYZ'
                  'abcdefghijklmnopqrstuvwxyz'
                  '0123456789')

def encode(s):
    """encode a url component"""
    return urllib.quote(s, "~")

def now():
    """returns the current unix timestamp"""
    epoch = int(time.time())
    return str(epoch)

def nonce():
    """returns a 24 letter nonce"""
    choices = []
    choices = [random.choice(nonce_alphabet) for i in xrange(24)]
    return ''.join(choices)

def authorization_header(token, method, url, query={}, post_query={}):
    """returns the header value for key Authorization (oauth impl)"""

    # build basic oauth variables
    oauth_consumer_key = token['oauth_consumer_key']
    oauth_nonce = nonce()
    oauth_signature_method = 'HMAC-SHA1'
    oauth_timestamp = now()
    oauth_token = token['oauth_token']
    oauth_version = '1.0'

    # compute signature
    dict = {
        'oauth_consumer_key': oauth_consumer_key,
        'oauth_nonce': oauth_nonce,
        'oauth_signature_method': oauth_signature_method,
        'oauth_timestamp': oauth_timestamp,
        'oauth_token': oauth_token,
        'oauth_version': oauth_version
    }
    dict.update(query)
    dict.update(post_query)
    param_str = urllib.urlencode(sorted(dict.iteritems())) # important step
    key = "{0}&{1}".format(token['oauth_consumer_secret'],
                           token['oauth_token_secret'])
    msg = "&".join(map(encode, [method, url, param_str]))
    m = hmac.new(key, msg, sha1)
    digest = m.digest()
    digestb64 = base64.b64encode(digest)
    oauth_signature = encode(digestb64)

    # build header
    auth_items = []
    auth_items.append('oauth_consumer_key="' + oauth_consumer_key + '"')
    auth_items.append('oauth_nonce="' + oauth_nonce + '"')
    auth_items.append('oauth_signature="' + oauth_signature + '"')
    auth_items.append('oauth_signature_method="' + oauth_signature_method + '"')
    auth_items.append('oauth_timestamp="' + oauth_timestamp + '"')
    auth_items.append('oauth_token="' + oauth_token + '"')
    auth_items.append('oauth_version="' + oauth_version + '"')

    return "OAuth " + ",".join(auth_items)

if __name__ == "__main__":
    input = 'k1=v1&k2=v2'
    print("encode '" + input + "': " + encode(input))

    print("now: " + now())

    print("a nonce: " + nonce())

    token = {
        'oauth_consumer_key': 'CON-KEY',
        'oauth_consumer_secret': 'CON-S3KR3T',
        'oauth_token': 'TOK',
        'oauth_token_secret': 'TOK-S3KR3T'
    }
    print("auth header: " + authorization_header(token, 'GET',
                                                 'https://localhost'))

#######################  End of the Program



       ************************** (Second Program which will use oauth1.py) ***************

####################  Start of the Program

#!/usr/bin/python

import os
import sys
import urllib
import urllib2
import json
#from simplejson import *
import logging

#sys.path.append(os.path.join('..', '..', 'main', 'python'))
import oauth1

#file_name=sys.argv[1]

# Path used here to write results to a file in my local folder. You can specify your own path,
# or ignore it if you don't want to write to an output file
path='/home/nitin/Nitin/Python'

logging.basicConfig(filename='twitter.log', level=logging.INFO)
logging.info("initialized logging")

def fetch(name):
    logging.info("fetching user details for '" + str(name) + "'")

    url = 'https://api.twitter.com/1.1/users/show.json'
    url_params = {'screen_name': name}

    qs = urllib.urlencode(url_params)
    #url_with_qs = url if len(qs) == 0 else url + "?" + qs
    if len(qs) == 0:
        url_with_qs=url
    else:
        url_with_qs=url + "?" + qs

    req = urllib2.Request(url_with_qs)
    req.add_header('Accept', '*/*')
    req.add_header('User-Agent', 'ni-client v0.0.1')
    req.add_header('Authorization', oauth1.authorization_header(token, 'GET', url, url_params))

    try:
        r = urllib2.urlopen(req)
        resp = r.read()
        logging.info("response status code: " + str(r.getcode()))
        print resp
        return resp
    except urllib2.URLError, e:
        print "Error while fetching", name, ":", e

def fetch_twitter_api():
    # Note: requires file_name (the sys.argv[1] line commented out above) to be set
    file_handles=open(file_name,'r')
    file=open(path + '/screen_firsttokenids.txt','w')
    for handle in file_handles.readlines():
        if handle:
            resp=fetch(handle)
            if resp:
                json_dict=json.loads(resp)
                twi_ID=json_dict['id']
                twi_retweet_count=0
                if json_dict.has_key('status'):
                    if json_dict['status']['retweet_count']:
                        twi_retweet_count=json_dict['status']['retweet_count']
                twi_fav_count=json_dict['favourites_count']
                handle=str(handle).strip()
                print "Screen_name: %s , ID: %s , retweet count: %s, Fav count: %s"% (handle, twi_ID,twi_retweet_count,twi_fav_count)
                output = "\nScreen_name: %s , ID: %s , retweet count: %s, Fav count: %s"% (handle, twi_ID, twi_retweet_count, twi_fav_count)
                #output="\nScreen_name %s , ID %s"% (handle, ID)
                file.write(output)
            else:
                handle=str(handle).strip()
                print "Got Error for Screen_name %s"% (handle)
                output="\nGot Error for Screen_name %s"% (handle)
                file.write(output)
    file.close()


if __name__ == "__main__":
    token = {
        # Replace these placeholders with your own Twitter API credentials;
        # never publish real keys and secrets.
        'oauth_consumer_key': 'YOUR-CONSUMER-KEY',
        'oauth_consumer_secret': 'YOUR-CONSUMER-SECRET',
        'oauth_token': 'YOUR-ACCESS-TOKEN',
        'oauth_token_secret': 'YOUR-ACCESS-TOKEN-SECRET'
    }
    #print(fetch('syncapse'))
    #print(fetch('samsungtweets'))
    fetch('nitin89syncapse')
    #fetch('_1E')
    #fetch_twitter_api()

#############################  End of the Program

HashTable Implementation in Python

###################  Start of the Program

#!/usr/bin/python

import sys

class HashTable:
    def __init__(self):
        self.size = 11
        self.slots = [None] * self.size
        self.data = [None] * self.size

    def put(self,key,data):
      hashvalue = self.hashfunction(key,len(self.slots))

      if self.slots[hashvalue] == None:
        self.slots[hashvalue] = key
        self.data[hashvalue] = data
      else:
        if self.slots[hashvalue] == key:
          self.data[hashvalue] = data  #replace
        else:
          nextslot = self.rehash(hashvalue,len(self.slots))
          while self.slots[nextslot] != None and \
                          self.slots[nextslot] != key:
            nextslot = self.rehash(nextslot,len(self.slots))

          if self.slots[nextslot] == None:
            self.slots[nextslot]=key
            self.data[nextslot]=data
          else:
            self.data[nextslot] = data #replace

    def hashfunction(self,key,size):
         return key%size

    def rehash(self,oldhash,size):
        return (oldhash+1)%size

    def get(self,key):
      startslot = self.hashfunction(key,len(self.slots))

      data = None
      stop = False
      found = False
      position = startslot
      while self.slots[position] != None and  \
                           not found and not stop:
         if self.slots[position] == key:
           found = True
           data = self.data[position]
         else:
           position=self.rehash(position,len(self.slots))
           if position == startslot:
               stop = True
      return data

    def __getitem__(self,key):
        return self.get(key)

    def __setitem__(self,key,data):
        self.put(key,data)

H=HashTable()
H[54]="cat"
H[26]="dog"
H[93]="lion"
H[17]="tiger"
H[77]="bird"
H[31]="cow"
H[44]="goat"
H[55]="pig"
H[20]="chicken"
print(H.slots)
print(H.data)

print(H[20])

print(H[17])
H[20]='duck'
print(H[20])
print(H[99])

######################  End of the Program

Logic Gates Implementation in Python

##################### Start of the Program

#!/usr/bin/python

class LogicGate:

    def __init__(self,n):
        self.name = n
        self.output = None

    def getName(self):
        return self.name

    def getOutput(self):
        self.output = self.performGateLogic()
        return self.output


class BinaryGate(LogicGate):

    def __init__(self,n):
        LogicGate.__init__(self,n)

        self.pinA = None
        self.pinB = None

    def getPinA(self):
        if self.pinA == None:
            return int(input("Enter Pin A input for gate "+self.getName()+"-->"))
        else:
            return self.pinA.getFrom().getOutput()

    def getPinB(self):
        if self.pinB == None:
            return int(input("Enter Pin B input for gate "+self.getName()+"-->"))
        else:
            return self.pinB.getFrom().getOutput()

    def setNextPin(self,source):
        if self.pinA == None:
            self.pinA = source
        else:
            if self.pinB == None:
                self.pinB = source
            else:
                print("Cannot Connect: NO EMPTY PINS on this gate")


class AndGate(BinaryGate):

    def __init__(self,n):
        BinaryGate.__init__(self,n)

    def performGateLogic(self):

        a = self.getPinA()
        b = self.getPinB()
        if a==1 and b==1:
            return 1
        else:
            return 0

class OrGate(BinaryGate):

    def __init__(self,n):
        BinaryGate.__init__(self,n)

    def performGateLogic(self):

        a = self.getPinA()
        b = self.getPinB()
        if a ==1 or b==1:
            return 1
        else:
            return 0

class UnaryGate(LogicGate):

    def __init__(self,n):
        LogicGate.__init__(self,n)

        self.pin = None

    def getPin(self):
        if self.pin == None:
            return int(input("Enter Pin input for gate "+self.getName()+"-->"))
        else:
            return self.pin.getFrom().getOutput()

    def setNextPin(self,source):
        if self.pin == None:
            self.pin = source
        else:
            print("Cannot Connect: NO EMPTY PINS on this gate")


class NotGate(UnaryGate):

    def __init__(self,n):
        UnaryGate.__init__(self,n)

    def performGateLogic(self):
        if self.getPin():
            return 0
        else:
            return 1


class Connector:

    def __init__(self, fgate, tgate):
        self.fromgate = fgate
        self.togate = tgate

        tgate.setNextPin(self)

    def getFrom(self):
        return self.fromgate

    def getTo(self):
        return self.togate


def main():
   g1 = AndGate("G1")
   g2 = AndGate("G2")
   g3 = OrGate("G3")
   g4 = NotGate("G4")
   c1 = Connector(g1,g3)
   c2 = Connector(g2,g3)
   c3 = Connector(g3,g4)
   print(g4.getOutput())

if __name__=='__main__':
    main()

##################### End of the Program

Printer Implementation with Queue in Python

#!/usr/bin/python

import sys
import random

class Queue:
    def __init__(self):
        self.items = []

    def isEmpty(self):
        return self.items == []

    def enqueue(self, item):
        self.items.insert(0,item)

    def dequeue(self):
        return self.items.pop()

    def size(self):
        return len(self.items)

class Printer:
    def __init__(self, ppm):
        self.pagerate = ppm
        self.currentTask = None
        self.timeRemaining = 0

    def tick(self):
        if self.currentTask != None:
            self.timeRemaining = self.timeRemaining - 1
            if self.timeRemaining <= 0:
                self.currentTask = None

    def busy(self):
        if self.currentTask != None:
            return True
        else:
            return False

    def startNext(self,newtask):
        self.currentTask = newtask
        self.timeRemaining = newtask.getPages() * 60/self.pagerate

class Task:
    def __init__(self,time):
        self.timestamp = time
        self.pages = random.randrange(1,21)

    def getStamp(self):
        return self.timestamp

    def getPages(self):
        return self.pages

    def waitTime(self, currenttime):
        return currenttime - self.timestamp


def simulation(numSeconds, pagesPerMinute):

    labprinter = Printer(pagesPerMinute)
    printQueue = Queue()
    waitingtimes = []

    for currentSecond in range(numSeconds):

      if newPrintTask():
         task = Task(currentSecond)
         printQueue.enqueue(task)

      if (not labprinter.busy()) and (not printQueue.isEmpty()):
        nexttask = printQueue.dequeue()
        waitingtimes.append( nexttask.waitTime(currentSecond))
        labprinter.startNext(nexttask)

      labprinter.tick()

    averageWait=sum(waitingtimes)/len(waitingtimes)
    print("Average Wait %6.2f secs %3d tasks remaining."%(averageWait,printQueue.size()))

def newPrintTask():
    num = random.randrange(1,181)
    if num == 180:
        return True
    else:
        return False

for i in range(10):
    simulation(3600,5)

Write a method to replace all spaces in a string with ‘%20’

################## Start of the Program

#!/usr/bin/python

import sys

given_str=raw_input("Please enter your string: ")

def replace_space():
    c_list=[]
    for ch in given_str:
        if ch == ' ':
            c_list.append("%20")   # replace each space with '%20'
        else:
            c_list.append(ch)
    print "".join(c_list)

if __name__=="__main__":
    replace_space()

################## End of the Program
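For comparison, Python's built-in str.replace does the same substitution in one line:

```python
# The character-by-character loop above can be replaced by str.replace,
# which returns a new string with every space substituted.
def replace_space_builtin(s):
    return s.replace(' ', '%20')

print(replace_space_builtin('Mr John Smith'))
```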

Given an image represented by an NxN matrix, where each pixel in the image is 4 bytes, write a method to rotate the image by 90 degrees Can you do this in place?

################# Start of the Program

#!/usr/bin/python

a=[[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]
num_of_rows=len(a)
num_of_col=len(a[0])

def rotate_image():
    b=[ [ None for i in range(num_of_col) ] for j in range(num_of_rows) ]
    for i in range(num_of_col):
        for j in range(num_of_rows):
            b[i][j]=a[(num_of_rows - 1) - j][i]
    print "Given Matrix : ", a
    print "90 Degree Rotated Matrix : ", b
       
if __name__=='__main__':
    rotate_image()

################## End of the Program
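The version above builds a second matrix. The in-place rotation the question asks for can be done layer by layer, cycling four cells at a time; a sketch for a square matrix:

```python
# Rotate a square matrix 90 degrees clockwise in place, one "ring" (layer)
# at a time: each step cycles four cells (top <- left <- bottom <- right).
def rotate_in_place(m):
    n = len(m)
    for layer in range(n // 2):
        first, last = layer, n - 1 - layer
        for i in range(first, last):
            offset = i - first
            top = m[first][i]                                 # save top
            m[first][i] = m[last - offset][first]             # left -> top
            m[last - offset][first] = m[last][last - offset]  # bottom -> left
            m[last][last - offset] = m[i][last]               # right -> bottom
            m[i][last] = top                                  # top -> right
    return m

m = [[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]
print(rotate_in_place(m))
```

This uses O(1) extra space (one temporary per swap) instead of a full second matrix.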

Write an algorithm such that if an element in an MxN matrix is 0, its entire row and column is set to 0.

#!/usr/bin/python

a=[[1,2,3],[4,5,6],[7,0,0],[10,9,12]]
#a=[[10,7,4,1],[17,0,5,2],[12,9,6,3]]
num_of_rows=len(a)
num_of_col=len(a[0])

#### Using a dictionary here is buggy: the dict overwrites the column value when the same row index holds more than one zero :-)
def matrix_wd_zero():
    m_dict={}
    print "Given Matrix : ", a
    for i in range(num_of_rows):
        for j in range(num_of_col):
            m_value=a[i][j]
            if m_value == 0:
                m_dict[i]=j
    for i,j in m_dict.items():
        for k in range(num_of_col):
            a[i][k]=0
        for p in range(num_of_rows):
            a[p][j]=0
    print "Result Matrix : ", a


############ More efficient function (caution: uses 100 as a sentinel, so it misbehaves if the matrix itself contains 100)
def matrix_wd_zero2():
    print "Given Matrix : ", a
    for i in range(num_of_rows):
        for j in range(num_of_col):
            if a[i][j]==0:
                for k in range(num_of_col):
                    a[i][k]=100
                for p in range(num_of_rows):
                    a[p][j]=100
    for i in range(num_of_rows):
        for j in range(num_of_col):
            if a[i][j]==100:
                a[i][j]=0
    print "Result Matrix : ", a

if __name__=='__main__':
    #matrix_wd_zero()
    matrix_wd_zero2()
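
The sentinel value 100 used above breaks if the matrix legitimately contains 100. Recording the zero rows and columns in sets first avoids sentinels entirely; a sketch:

```python
# Zero out rows/columns containing a 0 by first recording which rows and
# columns hold zeros, then applying them in a second pass (no sentinels).
def zero_matrix(m):
    rows, cols = set(), set()
    for i, row in enumerate(m):
        for j, value in enumerate(row):
            if value == 0:
                rows.add(i)
                cols.add(j)
    for i, row in enumerate(m):
        for j in range(len(row)):
            if i in rows or j in cols:
                m[i][j] = 0
    return m

print(zero_matrix([[1,2,3],[4,5,6],[7,0,0],[10,9,12]]))
```

The extra space is O(M + N) for the two sets, and correctness no longer depends on the matrix's contents.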

Thursday, January 9, 2014

Linked List in Python

############################## Start of the Program

#!/usr/bin/python

class Node:
    def __init__(self,initdata):
        self.data=initdata
        self.next=None
    def getData(self):
        return self.data
    def setData(self,item):
        self.data=item
    def getNext(self):
        return self.next
    def setNext(self,item):
        self.next=item

class UnorderedList:
    def __init__(self):
        self.head=None

    def isEmpty(self):
        return self.head==None

    def add(self,item):
        temp=Node(item)
        temp.setNext(self.head)
        self.head=temp

    def size(self):
        current=self.head
        count=0
        while current!=None:
            count=count+1
            current=current.getNext()
        return count

    def search(self,item):
        current=self.head
        found=False
        while current!=None and not found:
            if current.getData()==item:
                found=True
            else:
                current=current.getNext()
        return found

    def remove(self,item):
        current=self.head
        prev=None
        found=False
        while current!=None and not found:
            if current.getData()==item:
                found=True
            else:
                prev=current
                current=current.getNext()
        if not found:
            return    # item not in the list; nothing to remove
        if prev==None:
            self.head=current.getNext()
        else:
            prev.setNext(current.getNext())

########################### End of the Program

Wednesday, January 8, 2014

Program to check anagram strings with O(n)

####################### Start of the Program

#!/usr/bin/python

first_str=raw_input("Please input first string: ")
second_str=raw_input("Please input second string: ")

def validate_anagram():
    # assumes both strings contain only lowercase a-z characters
    c1=[0]*26
    c2=[0]*26
    for ch in first_str:
        c1[ord(ch) - ord('a')] += 1
    for ch in second_str:
        c2[ord(ch) - ord('a')] += 1
    if c1 == c2:
        print "Given strings are anagrams"
    else:
        print "Given strings are not anagrams"

if __name__=='__main__':
    validate_anagram()

########################## End of the Program
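An alternative O(n) check, using collections.Counter, handles any characters rather than only lowercase a-z:

```python
from collections import Counter

# Counter tallies each character in O(n); two strings are anagrams exactly
# when their character counts match.
def is_anagram(a, b):
    return Counter(a) == Counter(b)

print(is_anagram('listen', 'silent'))
```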

Function to download file from Google Spreadsheet

 This function downloads a file from a Google Spreadsheet and then converts it into text files, per our requirements, after reading the Excel file downloaded from the spreadsheet.

Sample usage : python lang_parsing.py (path where we want to store our files)

Note: I am assuming the spreadsheet document is named "Music App Static Labels List v b1.0" and the sheet is named "Sheet1". If these names change in the document, the corresponding names in the code must be changed as well; everything else is generic.

##################### Start of the Program

#!/usr/bin/python

import os
import xlrd
import sys
import time
import datetime

dir_path=sys.argv[1]

## Iterating over the Excel sheet given in the Google spreadsheet
def iter_workbook(xls_file):
    workbook = xlrd.open_workbook(dir_path + "/" + xls_file)
    worksheet = workbook.sheet_by_name('Sheet1')
    num_rows = worksheet.nrows - 1
    num_cells=worksheet.ncols-1
    open_file(num_rows,num_cells,worksheet)

## Function to open file in write mode as language given in Excel Sheet
def open_file(num_rows,num_cells,worksheet):
    i=0
    while num_cells>=0:
        file_name=worksheet.cell_value(0,i)
        file_obj=open(dir_path + "/" + file_name+".txt","w")
        write_file(num_rows,worksheet,file_obj,i)
        file_obj.close()
        num_rows=worksheet.nrows - 1
        num_cells=num_cells - 1
        i=i+1

## Function to write file with the language name as given in Excel Sheet

def write_file(num_rows,worksheet,file_obj,i):
    j=1
    while num_rows>0:
        obj='"'+worksheet.cell_value(j,0)+'"="'+worksheet.cell_value(j,i)+'"'
        file_obj.write(obj.encode('utf8'))
        file_obj.write("\n")
        j=j+1
        num_rows=num_rows - 1

## Function to check Excel file downloaded from Google Spreadsheet
def calling_func():
    postfix = "xls"
    for xls_file in os.listdir(dir_path):
        if not xls_file.lower().endswith( postfix ):
            continue
        else:
            iter_workbook(xls_file)

## Function to download a file from Google Spreadsheet using the googlecl command-line tool
## (invoked via os.system; googlecl is an external tool, not a built-in Python module)

def downl_xls():
    os.system("google docs get --title='Music App Static Labels List v b1.0' --dest=languages ")
    try:
        os.system("cp languages.xls " + dir_path + " ")
        calling_func()
    except:
        print "Mentioned directory doesn't exist"

if __name__=="__main__":
    downl_xls()

######################## End of the Program
