Connect to the Server.
This first call creates the TCP/IP connection, authenticates you to get access rights, and establishes the initial context for your work (i.e., which database you'll be using). You get back a connection object that manages the connection.
>>> db = MySQLdb.connect("motif.compbio.cs.cmu.edu","user","password","CDART")
>>> type(db)
<class 'MySQLdb.connections.Connection'>
#do work
>>> db.close()
In our lab, the module DurandLab stores many variables common to our environment. It also wraps the MySQLdb "connect()" calls so that you only have to change the code in one place if you move the database.
>>> import DurandLab
>>> db = DurandLab.getConn("user","password","CDART")
>>> type(db)
<class 'MySQLdb.connections.Connection'>
#do work
>>> db.close()
Managing connections can be a real hassle when writing functions rather than stand-alone utilities. If you call a function that creates a connection inside a loop, you may end up flooding the database with too many connections. One way around that is to create the connection once per module: the connection is made when the module is first imported, and each function then refers to that global connection, as in the sketch below. This isn't thread-safe, but it's convenient for non-threaded programs.
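Here is a minimal sketch of that pattern, reusing the server and database names from the example above; the module name, table name, and function are hypothetical:

# mymodule.py -- holds one connection shared by all functions in the module
import MySQLdb

# Made once, at import time; every function below reuses it.
_db = MySQLdb.connect("motif.compbio.cs.cmu.edu", "user", "password", "CDART")

def count_rows():
    """Example function that reuses the module-level connection."""
    cursor = _db.cursor()
    cursor.execute("SELECT COUNT(*) FROM domains")   # table name is hypothetical
    (count,) = cursor.fetchone()
    cursor.close()
    return count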
Sometimes you want to issue a new query before you have retrieved all the results of the last one. This is the reason for the cursor object.
The cursor object contains the functions that perform the actual queries and retrieve results, such as execute() and fetchone(). In Oracle and other databases, cursors let you "check out" a temporary state in the database and manipulate data without committing to the db until you are ready. In MySQL, cursors are handled only on the client side. You can run separate queries and fetchone() from the results using two cursor objects, but any updates to the database will affect both cursors.
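For example, here is a minimal sketch of two client-side cursors sharing one connection; the table and column names are hypothetical:

>>> c1 = db.cursor()
>>> c2 = db.cursor()
>>> c1.execute("SELECT id FROM domains")      # table/column names are hypothetical
>>> c2.execute("SELECT name FROM proteins")
>>> c1.fetchone()   # fetch from the first result set
>>> c2.fetchone()   # fetch from the second, independently
>>> c1.close()
>>> c2.close()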