A. Jesse Jiryu Davis

Sewing


December 2014. Sewing comforters for animals in shelters, during the Village Zendo's year-end meditation retreat.

Zen emphasizes work as a meditative and spiritual practice. During our year-end retreat we sew comforters for sheltered animals and donate them to The Mayor's Alliance for NYC's Animals. The comforters don't just give animals something soft to lie on: people are more willing to adopt a dog or cat when they see it on a cute comforter. The comforter goes with the animal to its new home and helps it make the transition.

Shinko


Yugetsu


Comforters


Seizan

Year End Sesshin Photos, Part I


December 2014. For the second year, the Village Zendo held our year-end meditation retreat at the Garrison Institute. To my surprise, I found time to shoot even while helping to lead the retreat. This is the first batch; there will be some nature photos and portraits in the next ones.


Work meeting


Zazen


Walking meditation


Zazen at the Garrison Institute


Exterior of the Garrison Institute


Bamboo garden at the Garrison Institute


Grounds of the Garrison Institute

2014: My Year In Review


Manhattan

I'm inspired by Katie Cunningham's blog post "looking back and looking forward" to review my year and the prospects for 2015.

2014

Python

Most of the code I wrote this year was open source Python for MongoDB: Bernie Hackett and I shipped PyMongo 2.7, and we began an overhaul of PyMongo to correct its regrettable designs, which we'll release in a couple months as PyMongo 3.0.

I made two versions of Motor, my non-blocking MongoDB driver for Tornado. Motor 0.2 was a near-rewrite of the driver, while Motor 0.3 was a more routine set of improvements.

I spent a large part of the year specifying logic in YAML and English instead of Python: I wrote the Server Discovery And Monitoring Spec to standardize how all MongoDB drivers talk to clusters of database servers.

For the first time, I added a feature to the Python standard library: I wrote a set of queue classes for asyncio, which shipped with Python 3.4 in March.
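
For the curious, usage looks roughly like this; a minimal producer/consumer sketch, not an excerpt from the standard library docs:

import asyncio

@asyncio.coroutine
def producer(queue):
    for i in range(3):
        # put() waits, without blocking the loop, if the queue is full.
        yield from queue.put(i)

@asyncio.coroutine
def consumer(queue):
    for _ in range(3):
        # get() waits until an item is available.
        item = yield from queue.get()
        print('got', item)

queue = asyncio.Queue(maxsize=1)
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(producer(queue), consumer(queue)))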

Speaking

I spoke at some neat conferences: PyCon, OSCON, MongoDB World. I lost my nervousness and had fun, even on the big PyCon stage. The foremost conference in my affections this year, as always, was Open Source Bridge. It's put on by the Stumptown Syndicate, whose mission is to "create resilient, radically inclusive tech and maker communities that empower positive change." The generosity of that statement is really put into practice at the conference.

Mentorship

To my surprise, I had a good time mentoring three young coders. In the year previous, I'd done all I could to shirk my responsibilities as a mentor, and the result was predictable: I failed at mentorship and everyone was mad at me. This year I turned it around. I guided a pair of interns as they contributed to the high-performance Monary driver. They weren't just successful at improving Monary—they got a talk into a local Python conference, and decided to work at MongoDB full time when they graduate. When my interns went back to school I was assigned a talented new hire to continue the project. She was just as productive as my interns had been, and presented her work to a data science conference.

I had not expected to feel so fulfilled by these junior coders' achievements. I've had to update how I see myself: what I care about, what my purpose is. For years I've resisted roles that take time from my own coding, but now I've tasted the fruit of leadership. I'm going to speak about this at PyTennessee in February and write about it soon.

Zen

My Zen practice is developing similarly: my teacher Enkyo Roshi asked me to be the Village Zendo's shuso, the "practice leader", this winter. So far, my duties have been mainly logistical, and not too strenuous. But over the next three months I'll help set the agenda for our temple, and in March I'll give my first dharma talk. The group will test me in dharma combat. If I pass, I'll be a senior student in our sangha and a regular speaker on Zen. The same way that my role as an individual coder is ending at work, my self-focused practice is over, replaced by practicing for the sake of my community.

Regrets

With all this excitement, I had to put some pursuits on hold. I took just a handful of good photos this year, and read hardly any good books. My one trip abroad was so abbreviated that I flew all the way to Taipei and only spent one day sightseeing.

2015

Writing and Speaking

What's 2015 about? I'm already committed to a frightening volume of writing and speaking: I'm giving two talks at PyCon and I hope to speak at MongoDB World and Open Source Bridge again. I also have the intimidating assignment of a chapter in the Architecture of Open Source Applications series. Fortunately, I have a very smart coauthor.

Python

I'll keep writing open source Python at MongoDB. We're going to release an overhauled PyMongo and say goodbye to five years of accumulated cruft. The current PyMongo is a very high-quality codebase, but the anguish of maintaining that level of quality despite all its baggage is not something I'll miss. Motor, meanwhile, will gain superpowers: it won't just support Tornado, but also asyncio, and perhaps even Twisted. If I achieve this Python async trifecta, expect me to brag about it at length.

I want to make a big contribution to a non-MongoDB Python package this year. Much like I added queues to asyncio in 2014, I'm going to add locks and queues to Tornado. The synchronization primitives currently living in my Toro package will become part of Tornado.

C

Despite all these Python commitments, 2015 will be the year I become bilingual. The plan is, I'm going to become sufficiently expert in C that I can join MongoDB's C Driver team, perhaps even lead it. Like my other plans, this one is intimidating. I'll need a great mentor, and I'm lucky to have one: our current C driver developer Jason Carey has agreed to show me the ropes. Wish me luck.

Zen

I have a great Zen mentor too: Enkyo Roshi has for some reason decided that I should become a senior Zen student this year, and I have to trust she knows what she's doing. If all goes well, I'll start giving dharma talks at the Village Zendo and at Sing Sing this year. I have a lot of opinions about Zen, but that won't amount to much when it comes time to speak. So please, wish me luck with this, too.

It Seemed Like A Good Idea At The Time: MongoReplicaSetClient


Road

The road to hell is paved with good intentions.

I'm writing post mortems for four regrettable decisions in PyMongo, the standard Python driver for MongoDB. Each of these decisions made life painful for Bernie Hackett and me—PyMongo's maintainers—and confused our users. This winter we're preparing PyMongo 3.0, and we have the chance to fix them all. As I snip out these regrettable designs I ask, what went wrong?

I conclude the series with the final regrettable decision: MongoReplicaSetClient.


The Beginning

In January of 2011, Bernie Hackett was maintaining PyMongo single-handedly. PyMongo's first author Mike Dirolf had left, and I hadn't yet joined.

Replica sets had been released in MongoDB 1.6 the year before, in 2010. They obsoleted the old "master-slave replication" system, which didn't do automatic failover if the master machine died. In replica sets, if the primary dies the secondaries elect a new primary at once.

PyMongo 2.0 had one client class, called Connection. By the time our story begins, Bernie had added most of the replica-set features Connection needed. Given a replica set name and the addresses of one or more members, it could discover the whole set and connect to the primary. For example, with a three-node set and the primary on port 27019:

>>> # Obsolete code.
>>> from pymongo import Connection
>>> c = Connection('localhost:27017,localhost:27018',
...                replicaset='repl0',
...                safe=True)
>>> c
Connection([u'localhost:27019', 'localhost:27017', 'localhost:27018'])
>>> c.port  # Current primary's port.
27019

If there was a failover, Connection's next operation failed, but it found and connected to the primary on the operation after that:

>>> c.db.collection.insert({})
error: [Errno 61] Connection refused
>>> c.db.collection.insert({})
ObjectId('548ef36eca1ce90d91000007')
>>> c.port  # What port is the new primary on?
27018

(Note that PyMongo 2.0 threw a socket error after a failover: we consistently wrap errors in our ConnectionFailure exception class now.)

Reading From Secondaries

The Connection class's replica set features were pretty well-rounded, actually. But a user asked Bernie for a new feature: he wanted a convenient way to query from secondaries. Our Ruby and Node drivers supported this feature using a different connection class. So in late 2011, just as I was joining the company, Bernie wrote a new class, ReplicaSetConnection. Depending on your read preference, it would read from the primary or a secondary:

>>> from pymongo import ReplicaSetConnection, ReadPreference
>>> rsc = ReplicaSetConnection(
...    'localhost:27017,localhost:27018',
...    replicaset='repl0',
...    read_preference=ReadPreference.SECONDARY,
...    safe=True)

Besides distributing reads to secondaries, the new ReplicaSetConnection had another difference from Connection: a monitor thread. Every 30 seconds, the thread proactively updated its view of the replica set's topology. This gave ReplicaSetConnection two advantages. First, it could detect when a new secondary had joined the set, and start using it for reads. Second, even if it was idle during a failover, after 30 seconds it would detect the new primary and use it for the next operation, instead of throwing an error on the first try.
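
The mechanism is simple to picture. Here's a hypothetical sketch of such a monitor thread; it's illustrative only, not ReplicaSetConnection's actual code, and refresh_topology is a made-up method:

import threading
import time

class Monitor(threading.Thread):
    def __init__(self, client, interval=30):
        super(Monitor, self).__init__()
        self.daemon = True          # Don't keep the interpreter alive.
        self.client = client
        self.interval = interval

    def run(self):
        while True:
            try:
                # Ask the known members for the set's current topology:
                # the primary, the secondaries, and any new arrivals.
                self.client.refresh_topology()
            except Exception:
                pass                # Keep monitoring even if one refresh fails.
            time.sleep(self.interval)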

ReplicaSetConnection was mostly the same as the existing Connection class. But it was different enough that there was some risk: the new code might have new bugs. Or at least, it might have surprising differences from Connection's behavior.

PyMongo has special burdens, since it's the intersection of two huge groups: MongoDB users and the Python world, possibly the largest language community in history. These days PyMongo is downloaded half a million times a month, and back then its stats were big, too. So Bernie trod very cautiously. He didn't force you to use the new code right away. Instead, he made a separate class you could opt in to. He released ReplicaSetConnection in PyMongo 2.1.

The Curse

But we never merged the two classes.

Ever since November 2011, when Bernie wrote ReplicaSetConnection and I joined MongoDB, we've maintained ReplicaSetConnection's separate code. It gained features. It learned to run mapreduce jobs on secondaries. Its read preference options expanded to include members' network latency and tags. Connection gained distinct features, too, diverging further from ReplicaSetConnection: it can connect to the nearest mongos from a list of them, and fail over to the next if that mongos goes down. Other features applied equally to both classes, so we wrote them twice. We had two tests for most of these features. When we renamed Connection to MongoClient, we also renamed ReplicaSetConnection to MongoReplicaSetClient. And still, we didn't merge them.

The persistent, slight differences between the two classes persistently confused our users. I remember my feet aching as I stood at our booth at PyCon in 2013, explaining to a user when he should use MongoClient and when he should use MongoReplicaSetClient—and I remember his expression growing sourer each minute as he realized how irrational the distinction was.

I explained it again during MongoDB Office Hours, when I sat at a cafeteria table with a couple users, soon after we moved to the office in Times Square. And again, I saw the frustration on their faces. I explained it on Stack Overflow a couple months later. I've been explaining this for as long as I've worked here.

The Curse Is Lifted

This year, two events conspired to kill MongoReplicaSetClient. First, we resolved to write a PyMongo 3.0 with a cleaned-up API. Second, I wrote the Server Discovery And Monitoring Spec, a comprehensive description of how all our drivers should connect to a standalone server, a set of mongos servers, or a replica set. This spec closely followed the design of our Java and C# drivers, which never had a ReplicaSetConnection. These drivers each have a single class that connects to any kind of MongoDB topology.

Since the Server Discovery And Monitoring Spec provides the algorithm to connect to any topology with the same class, I just followed my spec and wrote a unified MongoClient for PyMongo 3. For the sake of backwards compatibility, MongoReplicaSetClient lives a while longer as an empty, deprecated subclass of MongoClient.
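
The shim is roughly this shape; a sketch of the pattern, not PyMongo 3's literal code:

import warnings

from pymongo import MongoClient

class MongoReplicaSetClient(MongoClient):
    """Deprecated compatibility alias for MongoClient."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            "MongoReplicaSetClient is deprecated, use MongoClient instead",
            DeprecationWarning, stacklevel=2)
        super(MongoReplicaSetClient, self).__init__(*args, **kwargs)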

The new MongoClient has many advantages over both its ancestors. Mainly, it's concurrent: it connects to all the servers in your deployment in parallel. It runs your operations as soon as it finds any suitable server, while it continues to discover the rest of the deployment using background threads. Since it discovers and monitors all servers in parallel, it isn't hampered by a down server, or a distant one. It will be responsive even with the very large replica sets that will be possible in MongoDB 2.8, or the even larger ones we may someday allow.

Unifying the two classes also makes MongoDB URIs more powerful. Let's say you develop your Python code against a standalone mongod on your laptop, then you test in a staging environment with a replica set, then deploy to a sharded cluster. If you set the URI with a config file or environment variable, you had to write code like this:

# PyMongo 2.x.
import os

from pymongo import MongoClient, MongoReplicaSetClient
from pymongo.uri_parser import parse_uri

uri = os.environ['MONGODB_URI']
if 'replicaset' in parse_uri(uri)['options']:
    client = MongoReplicaSetClient(uri)
else:
    client = MongoClient(uri)

This is annoying. Now, the URI controls everything:

# PyMongo 3.0.
client = MongoClient(os.environ['MONGODB_URI'])

Configuration and code are properly separated.

The Moral Of The Story

I need your help—what is the moral? What should we have done differently?

When Bernie added read preferences and a monitor thread to PyMongo, I understand why he didn't overhaul the Connection class itself. The new code needed a shakedown cruise before it could be the default. You ask, "Why not publish a beta?" Few people install betas of PyMongo. Customers do thoroughly test early releases of the MongoDB server, but for PyMongo they just use the official release. So if we published a beta and received no bug reports, that wouldn't prove anything.

Bernie wanted the new code exercised. So it needed to be in a release. He had to commit to an API, so he published ReplicaSetConnection alongside Connection. Once ReplicaSetConnection was published it had to be supported forever. And worse, we had to maintain the small differences between Connection and ReplicaSetConnection, for backwards compatibility.

Maybe the moment to merge them was when we introduced MongoClient in late 2012. You had to choose to opt into MongoClient, so we could have merged the two classes into one new class, instead of preserving the distinction and creating MongoReplicaSetClient. But the introduction of MongoClient was complex and urgent; we didn't have time to unify the classes, too. It was too much risk at once.

I think the moral is: cultivate beta testers. That's what I did with Motor, my asynchronous driver for Tornado and MongoDB. It had long alpha and beta phases where I pressed developers to try it. I found PyMongo and AsyncMongo users and asked them to try switching to Motor. I kept a list of Motor testers and checked in with them occasionally. I ate my own hamster food: I used Motor to build the blog you're reading. Once I had some reports of Motor in production, saw it mentioned on Stack Overflow, and discovered projects on GitHub that depended on Motor, I figured I had users and it was time for an official release.

Not all these methods will work for an established project like PyMongo, but still: for PyMongo 3.0, we should ask our community to help shake out the bugs.

When the beta is ready, will you help?


This is the final installment in my four-part series on regrettable decisions we made with PyMongo.

It Seemed Like A Good Idea At The Time: PyMongo's "copy_database"


Road

The road to hell is paved with good intentions.

I'm writing eulogies for four regrettable decisions we made when we designed PyMongo, the standard Python driver for MongoDB. Each of them made maintaining PyMongo painful, and confused our users. This winter, as I undo these regrettable designs in preparation for PyMongo 3.0, I carve for each a sad epitaph.

Today we reach the third regrettable decision: "copy_database".


The Beginning

In the beginning, MongoDB had a "copydb" command. Well, not the beginning, but it was an early feature: MongoDB was less than a year old when Dwight Merriman implemented "copydb" in September 2008.

The initial protocol was simple. The client told MongoDB the source and target database names, and MongoDB made a copy:

copydb

You could give the target server a "fromhost" option and it would clone from a remote server, similar to how a replica set member does an initial sync:

copydb fromhost
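
From a driver's point of view these are just commands. Here's roughly how you could run them with PyMongo's generic command helper, assuming a server old enough to support copydb and no authentication:

from pymongo import MongoClient

client = MongoClient('localhost', 27017)

# Copy a database that lives on the same server.
client.admin.command('copydb', fromdb='test', todb='test_copy')

# Or clone from a remote source with the "fromhost" option.
client.admin.command('copydb', fromdb='test', todb='test_copy',
                     fromhost='source.example.com')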

This is a really useful feature for sysadmins who occasionally copy a database using the mongo shell, but of course it's not a likely use case for application developers. So the mongo shell has a "db.copyDatabase" helper function, but at the time none of our drivers had one.

A year later, in January 2010, a user wanted to do "copydb" from a remote server with authentication. Aaron Staple came up with a secure protocol: as long as the client knows the password for the source server, it can instruct the target server to authenticate, without revealing its password to the target server. The client tells the target to call "getnonce" on the source, and the source responds with a nonce, which the target forwards to the client:

copydbgetnonce

Then the client hashes its password with the nonce, and gives the hashed password back to the target server, allowing the target to authenticate against the source once:

copydb with auth
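
Spelled out as a driver might perform it, the handshake looks something like this. It's an illustrative sketch with made-up credentials, not PyMongo's real implementation:

from hashlib import md5

from pymongo import MongoClient

target = MongoClient('target.example.com')

# Step 1: the target fetches a nonce from the source and relays it to us.
nonce = target.admin.command(
    'copydbgetnonce', fromhost='source.example.com')['nonce']

# Step 2: hash the password with the nonce. The plaintext password
# never travels to the target server.
digest = md5('jesse:mongo:password'.encode()).hexdigest()
key = md5((nonce + 'jesse' + digest).encode()).hexdigest()

# Step 3: send copydb with the username, nonce, and key, on the same
# socket that ran copydbgetnonce.
target.admin.command(
    'copydb', fromhost='source.example.com',
    fromdb='test', todb='test_copy',
    username='jesse', nonce=nonce, key=key)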

There's one important detail (imagine foreboding music now): "copydbgetnonce" and "copydb" must be sent on the same socket. This will be important later.

In any case, Aaron added support for this new protocol to MongoDB and to the mongo shell, so sysadmins could copy a database from a password-protected remote server. So far so good. But in a moment, we would make a regrettable decision.

PyMongo and "copy_database"

Nicolas Clairon, author of the PyMongo wrapper library MongoKit, asked us to add a feature to PyMongo. He wanted PyMongo to have a special helper method for copydb so "every third party lib can use this method". PyMongo's author, Mike Dirolf, leapt to it: just two days later, he'd implemented a "copy_database" method in PyMongo, including support for authentication.

I understand why this seemed like a good idea at the time. Let's avoid duplication! Put "copy_database" in PyMongo, so every third party lib can use it! No one asked whether any users actually executed "copy_database" in Python. Mike just went ahead and implemented it. How could he know he was setting a course to hell?

Requests, Again

Remember how I said copydbgetnonce and copydb must be sent on the same socket? Well, that wasn't a problem for Mike: at this time PyMongo always reserved a socket for each thread, and you couldn't turn this "feature" off. So if one thread called copydbgetnonce and then copydb, the two commands were sent on the same socket automatically.

But, as I described in my "start_request" story, after Mike had left and I joined the company, I made major connection pooling improvements. This included the ability for threads to freely share sockets in the connection pool. For real applications this dramatically increased PyMongo's efficiency. But it was bad news for "copy_database" with auth: now we needed a special way to ensure that the two commands were executed on the same socket. So I had to update "copy_database": before calling copydbgetnonce, it checked whether the current thread had a socket reserved. If not, it reserved one. Then it called the two commands in a row. Finally, it returned the socket, but only if it had been specially reserved for the sake of "copy_database".
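
In PyMongo 2.x terms, the fix amounts to pinning one socket to the thread while both commands run, the same idea exposed to users by start_request. A simplified sketch of the idea, not the method's internal code:

from pymongo import MongoClient

client = MongoClient('localhost', 27017)

# start_request() pins one socket to the current thread until the
# request ends, so both commands travel over the same connection.
with client.start_request():
    nonce = client.admin.command(
        'copydbgetnonce', fromhost='source.example.com')['nonce']
    # ... hash the password with the nonce, as in the earlier sketch ...
    key = '<hashed password>'
    client.admin.command(
        'copydb', fromhost='source.example.com',
        fromdb='test', todb='test_copy',
        username='jesse', nonce=nonce, key=key)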

There were already two code paths for "copy_database": one with authentication and one without. Now there were four: with and without authentication, with and without a socket already reserved for the current thread. Since concurrency bugs were a greater threat, I bloated the test suite with a half-dozen tests, probing for logic bugs and race conditions.

Motor

Six months after I'd made these changes to PyMongo's connection pool and its "copy_database" method, I first announced Motor, my asynchronous driver for Tornado and MongoDB. Motor wraps PyMongo and makes it asynchronous, allowing I/O concurrency on a single thread, using Tornado's event loop.
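
A minimal example of the style Motor enables with Tornado, assuming a local mongod; the details have shifted a bit between Motor versions:

from tornado import gen, ioloop
import motor

client = motor.MotorClient('localhost', 27017)

@gen.coroutine
def insert_one():
    # The insert yields to the event loop while it waits on MongoDB,
    # so other operations can make progress on the same thread.
    _id = yield client.test.collection.insert({'x': 1})
    print(_id)

ioloop.IOLoop.current().run_sync(insert_one)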

Tricking PyMongo into executing concurrently on one thread was straightforward, actually, except for one method: "copy_database". It wants to reserve a socket for the current thread, but in Motor, many I/O operations are in flight at once for the main thread. So I had to reimplement "copy_database" from scratch just for Motor. I also reimplemented all PyMongo's "copy_database" tests, and distorted Motor's design so it could reserve sockets for asynchronous tasks, purely to support "copy_database".

I made a horrible mistake, too: I introduced a bug in Motor's "copy_database" that leaked a socket on every single call, but no one ever complained. The method clearly was risky, and unused.

What the hell was I thinking when I added "copy_database" to Motor? Why would anyone need it? We'd seen no signs, after all, that anyone was even using "copy_database" in PyMongo. And compared to PyMongo, Motor is optimized for web applications with lots of small concurrent operations. It's not intended for rare, lengthy tasks like "copy_database". But I was new at the company and I was excited about making Motor feature-complete: it would include every PyMongo feature, no matter how silly.

SCRAM-SHA-1

The breaking point for the PyMongo team came this fall, when MongoDB 2.8 introduced a new authentication mechanism, SCRAM-SHA-1. When 2.8 is released, SCRAM-SHA-1 will be the new default. And it's not just a better way of hashing passwords: it requires a different authentication protocol than the old MongoDB Challenge-Response mechanism. The old "copydbgetnonce" trick doesn't work with SCRAM-SHA-1.

Our senior security engineer Andreas Nilsson devised a new protocol for copydb with SCRAM-SHA-1, using a SASL conversation. It's more complex than the old protocol:

copydbsaslstart

This accomplishes the same goal as the old copydbgetnonce protocol: it allows the target server to log in to the source server once, without the client revealing its password to either server. But instead of two round trips, three are now required. Andreas added the new protocol to the mongo shell. Bernie had already implemented PyMongo's support for authentication with SCRAM-SHA-1, and he asked me to add it to our "copy_database" helper, too.

I've worked at MongoDB over three years, but I'm still prone to rushing headlong into new features. "I know!" I thought, "I won't just add SCRAM-SHA-1 to copy_database, I'll add LDAP and Kerberos, too!" It took Bernie and Andreas some effort to talk me down. I scaled the work to reasonable proportions. My final patch is tight.

But still, PyMongo's "copy_database" is silly. It now has eight code paths. It can run without authentication, or use SCRAM-SHA-1, or use the old authentication mechanism, or it can try to guess which mechanism to use. And it's implemented both for MongoClient and for MongoReplicaSetClient. The code is hard to follow and the test footprint is like a sasquatch's.

I began to contemplate adding SCRAM-SHA-1 to Motor's "copy_database", too, when suddenly I had a thought: what if we could stop the pain? What if we could just...delete "copy_database"?

Redemption

This year the company has created a dedicated Product Management team whose job is to know what users want, or to find out. Before, each of us at MongoDB had our various contacts with users—Bernie and I knew what Python programmers asked about, salespeople knew what customers asked for, the support team knew what questions people called with—but like any startup we were flying by the seat of our pants when we made decisions about what features to add and maintain.

Now we have a group of professionals gathering and sorting this data. This group can answer my question, "Hey, does anyone care about PyMongo's copy_database method, or is the mongo shell's method the only thing people use?" They researched for a few days and replied,

Consensus from the field is that copydb comes up very little, whether across hosts or not. They are generally OK with not supporting it in the drivers as it is a more administrative task anyway, but would want it supported by the shell.

Things were looking up. Maybe we could just delete the damn method. We polled the drivers team and found that of all the eleven supported MongoDB drivers, only PyMongo, Motor, and the Ruby Driver have a "copy_database" method. And the Ruby team plans to remove the method in version 2.0 of their driver. So we'll remove it from PyMongo in the next major release, and Motor too. Not only will we delete risky code, we'll be more consistent with the other drivers.

Bright Future

In PyMongo's next release, version 2.8, "copy_database" still works; in fact it gains the ability to do SCRAM-SHA-1 authentication against the source server. But it's also deprecated. In PyMongo 3.0 "copy_database" will be gone, and good riddance: there's no evidence that anyone copies databases using Python, and it's one of the most difficult features to maintain. It'll be gone from Motor 0.4 as well.

The lesson I learned is similar to last week's: gather requirements. But this time, we didn't just make up a useless feature: someone actually asked us for it. Even so, we should have turned him down. "Innovation is saying no to a thousand things," according to Steve Jobs.

Features are like children. They're conceived in a moment of passion, but you must support them for years. Think!


The final installment in "It Seemed Like A Good Idea At The Time" is MongoReplicaSetClient.

If Siddhartha Didn't Leave Home


Rohatsu

Last night I sat in meditation all night with my sangha, The Village Zendo. It's the Zen way to celebrate the anniversary of Buddha's enlightenment. It occurred to me some time in the middle of the night that I'm older now than he was the night he had his great awakening. It's like when you realize you're older than your parents were. I have a tender feeling for him that I wasn't prepared for.

But the night he awakened was not his hardest night. Actually, I get the sense he was pretty confident that night. He knew he was on the home stretch. No, the hardest night in Siddhartha Gautama's life was the night he left his palace, six years earlier.

He was only 29 then; he'd been raised in privilege and luxury, and he had never suffered for a moment. But he'd seen how others suffered, and it shocked him. He resolved to live as a homeless monk and study meditation until he discovered the way to liberate everyone from suffering.

I think Siddhartha had made up his mind firmly to leave his luxurious life behind, but leaving his wife and son was almost too hard to bear. Before he stole away from the palace that night he snuck to his wife's bedroom to look at them one last time. He padded down the dark hallway. His soft aristocratic feet were silent on the marble floor. He lifted the heavy curtain and looked at his wife and son as they slept. He would not see them again for six years.

I recently learned a new detail: according to Kenneth Kraft's The Wheel of Engaged Buddhism, there's a version of the story where Siddhartha couldn't see his son Rahula's face, because his wife Yasodhara had flung her arm across him as she slept. Siddhartha was denied the final glimpse of his son.

I know another version where Yasodhara only pretended to sleep. She knew Siddhartha's plan to escape the palace and seek the path of liberation, and she wanted him to accomplish it. Maybe she lay still, when she heard him lift the curtain, because she thought that if she met his eyes he'd lose his nerve, and stay. I wonder if she hid Rahula's face, too, because if Siddhartha saw him he would not have gone.

(Yasodhara cared about his mission because she was a passionate way-seeker herself: years later she became a nun and attained enlightenment.)

But what if Siddhartha had turned back? Maybe it would have been just as good. After all, there's the story of Vimalakirti, a family man who didn't leave home, but his insight was so great it rivaled the Buddha's. Vimalakirti was a wealthy businessman who managed great farmlands, owned huge herds of oxen, awarded genius grants, gave TED talks, and somehow he was still able to awaken. So perhaps if Siddhartha had turned back and stayed with his wife and son, and gone into politics like his father, he might still have established the path of liberation. He could have reformed the kingdom, abolished the caste system, guaranteed universal health insurance. We might still practice his way today, even if he hadn't become a monk.

I was 23 when I left home to study meditation. The life I left was pretty nice, like Siddhartha's: I had a fun job at a software company in Austin, a girlfriend, a new silver Honda. And like him, I was dissatisfied. I was smoking pot twice a day, failing at my job, being a jerk to my girlfriend. I needed to wake myself up, so I left. I went to Yokoji Zen Mountain Center to live for a year. The place was cold and austere and we meditated for hours every day and I didn't make any money, but still: it's one of my favorite years, and it set my life on the right track.

When my year was up, I had to choose whether to stay in the monastery or return to regular life. I chose to give up the Buddha's simple life and take up Vimalakirti's: work and sex and money and stress. And now that I'm older than the Buddha was, I know for certain I made the right choice.