After a long time and a bunch of work, Will 2.0 is officially out. It brings a huge number of improvements, and this page will catch you up on everything that's new, how to upgrade your existing Will installations (basically, things should Just Work), and the ways Will can grow from here. Let's dive in!
This is the big-ticket item. If you've been stuck with a lonely HipChat install while the world moved to Slack, you can now just turn on the Slack backend, add a token, and have your Will also live in Slack. It's really that easy.
Will can now connect to any number of backend services at once, and automatically routes messages to the right place. For the 2.0 release, that includes Slack, Rocket.Chat, HipChat, and a shell (stdin/stdout console) backend. Very shortly, he'll pick up Telegram and SMS support.
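To make that concrete, here's a rough sketch of what a multi-backend config might look like. The setting names below are illustrative - check the docs for the exact keys your version expects.

```python
# config.py - illustrative sketch; the exact setting names are in the docs.
import os

# Enable as many IO backends as you like; Will routes messages to the right one.
CHAT_BACKENDS = [
    "slack",
    "hipchat",
    "rocketchat",
    "shell",      # stdin/stdout console, handy for local development
]

# Credentials come from the environment, so tokens stay out of source control.
SLACK_API_TOKEN = os.environ.get("WILL_SLACK_API_TOKEN")
HIPCHAT_V2_TOKEN = os.environ.get("WILL_HIPCHAT_V2_TOKEN")
```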
For folks who are sticking with the Atlassian/HipChat ecosystem, once Stride has a stable API, he'll work there too. Want Will to work with your favorite chat service? Submit a pull request, and we'll get it in (that's how Rocket.Chat got in there!).
The technical details are in the documentation, but briefly: Will now has an IOBackend class that's pretty straightforward to implement, and if you want to add a new service, all you need to do is wire it up.
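To give a feel for the shape of the work, here's a conceptual sketch of a backend for an imaginary chat service. This isn't the real base class (the actual interface and method names are in the docs), but the three jobs are the same: connect, normalize what comes in, and deliver what goes out.

```python
# Conceptual IO backend sketch - the real IOBackend interface is in the docs.
class MyChatServiceBackend(object):
    friendly_name = "My Chat Service"

    def connect(self):
        # Open a websocket or long-poll connection to the service here.
        pass

    def normalize_incoming_event(self, raw_event):
        # Translate a service-specific payload into the common message format
        # the rest of Will (analysis, generation, execution) consumes.
        return {
            "type": "message",
            "content": raw_event.get("text", ""),
            "sender": raw_event.get("user"),
            "channel": raw_event.get("channel"),
        }

    def handle_outgoing_event(self, event):
        # Take one of Will's replies and deliver it via the service's API.
        print("Would send %r to %s" % (event["content"], event["channel"]))
```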
Finally, all our bots are future-proofed.
As big as feature #1 was, #2 is more important in the long game. For all of 1.x, Will was basically an expression-matching machine. You'd tell him to watch for "hi", and he'd know to say "Oh, hello!". Easy, but limited.
For 2.0, Will's brains are now pluggable. He ships with support for the same expression matching (with some bonus fuzzy matching) out of the box, but he's now a fully expandable, AI-capable bot.
Will's brains break down into three parts:
Analysis is where Will adds context, history, meaning, and metadata to a message. It can be used for things like sentiment analysis, natural language parsing, or whatever other information you'd like him to have on hand when figuring out what to say.
Out of the box, he ships with a "history" backend, which provides recent messages he's heard in chat. Coming up is TextBlob support, which will add part-of-speech tagging, sentiment analysis, translation, and more.
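As a rough illustration of the idea (the class and method names here are made up for the example, not the shipped interface), an analysis backend takes a message in and hands context back:

```python
# Illustrative analysis backend: tag each message with a naive sentiment score.
# The real base class and hook names are in the docs; this just shows the idea.
POSITIVE = {"great", "thanks", "awesome", "love"}
NEGATIVE = {"broken", "down", "angry", "hate"}

class NaiveSentimentAnalysis(object):
    def analyze(self, message):
        words = set(message["content"].lower().split())
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        # Whatever gets returned is attached to the message as context,
        # available to the generation and execution backends downstream.
        return {"sentiment": score}
```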
You know that part before you speak, where you're running through the things you might say? That's Will's generation cycle. Like analysis, he supports any number of pluggable generation backends, each of which proposes options along with metadata about how confident it is in each one.
In 2.0, Will ships with generation backends for strict regular expression matching (the 1.x behavior) and fuzzy regex matching. There aren't concrete plans for the next generation backends yet, but I'm hoping to see a pull request for a full-on chatterbot at some point.
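Here's a toy sketch of the idea - the option and confidence plumbing shown is illustrative, not Will's exact classes:

```python
# Illustrative generation backend: propose a greeting when someone says hi.
import re

class GreetingGeneration(object):
    def generate(self, message):
        options = []
        if re.search(r"\b(hi|hello)\b", message["content"], re.IGNORECASE):
            options.append({
                "text": "Oh, hello!",
                # Confidence lets the execution backend weigh this option
                # against ones proposed by other generation backends.
                "confidence": 0.9,
            })
        return options
```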
In the end, there can be only one. Or at least, most of the time, that's what makes sense. The execution backend is responsible for looking through all the options created by generation, deciding on the best one, and doing it. It's also the part that checks to see if certain operations are allowed for a given user, or if there are any restrictions that prevent an action from happening.
Out of the box, Will's default execution backend simply chooses the best option available and does it. He also ships with a (disabled by default) execution backend that executes every option above a certain confidence threshold. Try what makes sense to you, and send PRs to keep improving him!
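In the same illustrative spirit as the sketches above, the default behavior boils down to "pick the highest-confidence option and act on it":

```python
# Illustrative execution backend: act on only the single best option.
# (The shipped "do everything over a threshold" variant would instead
# execute every option whose confidence clears some cutoff.)
class BestOptionExecution(object):
    def execute(self, message, options, send_reply):
        if not options:
            return
        best = max(options, key=lambda o: o["confidence"])
        # Permission and restriction checks would happen here, before acting.
        send_reply(message, best["text"])
```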
Will 2.0 has a totally rewritten core, built on a (you guessed it) pluggable architecture that will let you run Will how and where you want.
A couple of the big highlights:
Will 1.x was a hard-wired, tightly coupled system. Will 2.x uses a secure publish-subscribe model to pass events between application components, with a main watcher process keeping the key pieces moving.
Will now encrypts all data, both on the pub-sub wire and in the storage backends. He ships with AES out of the box, and supports pluggable encryption backends for different security requirements.
Given the higher-risk environments a lot of bots run in, and the increasing business value of what's said in chat rooms, it made sense to keep all of Will's knowledge safe.
Pull requests to provide different options and improve security are welcome!
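If AES doesn't fit your requirements, an encryption backend really just has to turn Will's payloads into ciphertext and back. Here's a minimal sketch using the cryptography library's Fernet recipe (AES-128-CBC plus HMAC); the hook names Will actually expects are in the docs, so treat this as the shape rather than a drop-in:

```python
# Illustrative encryption backend using cryptography's Fernet recipe.
# Will's bundled AES backend and its exact hook names may differ.
from cryptography.fernet import Fernet

class FernetEncryption(object):
    def __init__(self, key=None):
        # In a real deployment the key would come from Will's secret key
        # setting or the environment, not be generated on the fly.
        self.fernet = Fernet(key or Fernet.generate_key())

    def encrypt(self, raw_bytes):
        # Returns a base64-encoded token safe to put on the wire or in storage.
        return self.fernet.encrypt(raw_bytes)

    def decrypt(self, token):
        return self.fernet.decrypt(token)
```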
In keeping with this philosophy, Will 2.x provides pluggable backends for storage (Redis, Couchbase, and file backends are built in) and pub-sub (Redis out of the box, with ZeroMQ on the way).
Adding other backends is as simple as subclassing, implementing a few methods, and submitting a pull request.
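For a feel of what "a few methods" means in practice, here's a conceptual in-memory storage backend (method names are illustrative; the real base class is in the docs). It's only useful for tests, but a Redis or Couchbase backend covers the same save/load/clear surface:

```python
# Illustrative storage backend: an in-memory dict standing in for Redis,
# Couchbase, or a file. Method names are a sketch, not the real interface.
class InMemoryStorage(object):
    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

    def load(self, key, default=None):
        return self._data.get(key, default)

    def clear(self, key):
        self._data.pop(key, None)

    def clear_all_keys(self):
        self._data.clear()
```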
Personally, I hold Python 3 to be a massive blunder, and still wonder if the community will ever recover from it. Having built companies both in Python 2 and Python 3, I'm never doing 3 again.
But it's also the version that a lot of people are using, working in, and learning Python from - and it's important for Will to be in that world. So, from this release forward, Will will support both Python 2 and Python 3. Python 4, whenever it arrives, will be supported too.
Will 2 moves us from a scrappy little library to a software package that's going to be around for the long term - one that's safe to build our businesses and bots on. Language support is an integral part of that shift.
Whether you've been using Will for four years like some of us old hands, or are just getting started, 2.x will ensure that Will continues to grow, improve, and stay relevant far into the future.
I deeply believe that platform lock-in isn't an ok thing to do to people, and we want no part of keeping you, your bots, or the great things you develop locked up. It's a big part of why Will's been architected to be pluggable in so many places - the world will continue to grow and change, and we think you and your Will should be able to grow and change with it.
So most of all, I'd like to say hop on in, contribute your improvements, and keep Will growing into the future. The best stuff is yet to come. :)
-Steven