If you’ve been following along with the show, you’ll have learned what the investor-founder dynamic is like from the viewpoint of an entrepreneur. In the last episode, I sat down with the CEO & Co-Founder of StyleSeat, Melody McCloskey, to understand what it takes to raise capital at the various stages of a startup, from Seed to Series A and beyond.
In today’s episode, we’re going to get the opposite perspective by diving into an investor’s mindset. We’ll try to understand what compels them to write a check and, more importantly, how they add value beyond a check.
(That’s right, they’re not just a source of capital; they can be an indispensable partner!)
I’ve invited Shruti Gandhi to chat with us. Shruti is the Founding and Managing Partner at Array VC, a fund that invests in early stage startups. She began her career as a software engineer. Since then, she has been a startup founder and an investor in five funds; now, she has started her own.
Having been in many roles, Shruti understands the importance of capital, but she also knows firsthand that founders need help beyond the check.
During our conversation, you’ll learn:
- How investors can help you accelerate your company’s sales
- How investors test a founder’s commitment to their product and company
- How to work with different types of investors: corporate, institutional, and angels
- Why investors look for founders who have been an early employee at a startup before striking out on their own
- How to dig into an investor’s thesis to find out if they are a right fit for you
What I found most refreshing about this episode is how Shruti shares learnings from both successful and “failed” experiences. The failures have taught her the importance of investors who are helpful to founders in a variety of functions.
This willingness and ability to help is something we should all be looking for in the people we choose to partner with!
So if you’re an early employee at a startup or a founder, watch this episode and learn how to scout the investors who are going to add value to your company beyond the check.
After you’ve watched the video, let us know what your favorite part was in the blog comments.
Stay tuned for the next episode of FemgineerTV in November. I’ll be hosting Danielle Morrill, the Co-Founder and CEO of Mattermark. Subscribe to our YouTube channel to know when it’s out!
On October 12th, Hillel Glazer and I hosted our first Agile Baltimore Unconference. Sure, there are other Agile events in the area but I really wanted to create something that was independent and “felt” like Baltimore. The result? The first Agile Baltimore Unconference! I wanted to organize an event that would provide value to both the sponsors and the attendees, at a reasonable price. With less than five days remaining, the event was sold out with 100 attendees.
As part of the registration, we provided an event t-shirt (thank you smartlogic for the t-shirt design and sponsorship). The t-shirts came out great. The Agile Baltimore logo was on the chest and the event sponsors were proudly displayed on the back, with a skyline of Baltimore. It was top-notch design work and printing.
Food was supplied by Sunshine Gille. They provided breakfast, lunch, and a cocktail hour. I’m sure others would agree: the Greek lunch was pretty damn good. If you’re local to Baltimore, I recommend you look them up for your next event. They were really friendly and easy to work with.

Schedule
So, what is an unconference?
A loosely structured conference emphasizing the informal exchange of information and ideas between participants, rather than following a conventionally structured program of events.
You might have read my blog post back in August, titled Divergence at Agile2015. If I had wanted to structure a traditional conference, I would have spent a ton more time planning the tracks and schedule. For this event, I wanted to focus on getting people to share ideas around the themes of Lean, Agile, Startups, and Technology. I honestly do not believe anyone could say what the most important topics will be one, three, or six months out. Having the attendees plan out the schedule the day of the event ensured we would have the most relevant topics.
Never underestimate the power of self-organization. In the first hour and a half of the event, we took somewhere between 300 and 500 ideas and narrowed them down to nine. If you want to see some great pictures of this in action, click on the picture links below.

Sponsors
When putting this event together, I had a choice. I could charge attendees a lot more money or get some sponsors. After doing some quick math, I realized that we needed sponsors. That’s a blog post all on its own. Get too many sponsors and too few registrations and I could forget ever having another event. My goal was to have sponsors who I personally like. I did not want sponsors who would create a local turf war. Of course, I wanted Agile Alliance on board. Done. ETC Baltimore was helping us with the space. Done. Smartlogic approached us about doing the t-shirt designs and printing. Done.
What was left? Breakfast, lunch, the after-party, lanyards, and enough post-it notes and sharpies to satisfy the needs of a small army of Agilists. Thank you very much to Rally Software and Leankit, both of which had booths and gave product demos.
To underwrite the event, LeadingAgile was there every step of the way. When we needed to write a check for food and not all of the sponsor checks had arrived, it was great to know LeadingAgile had my back.

Photos and Videos

Photos
Photos were saved to the Agile Baltimore Facebook photo stream. You can see even more out on the Agile Baltimore Lean coffee meetup site. If you were at the event and took pictures, please send me a copy! I’ll get them posted.

Videos
This was my first real attempt at videography at an event. I used Periscope to do some live streaming on Twitter. I then saved the videos out to the [Derek Huether YouTube channel]. Periscope won’t let you record in landscape orientation yet, so you’ll just have to suffer through the black bars on either side of the videos. In the video below, you can see an exchange between Mike Cottmeyer, Paul Boos, and others about buy-in techniques when trying to go Agile. Paul Boos is heard saying, “Something that promotes individual heroes is an impediment.” [YouTube Video]

Summary
Never be afraid to try something you’ve never done before. I have a newfound respect for those who put on events. It can be exhausting! My heart goes out to the Purple Shirts that I see each year at the Agile Alliance’s Agile20xx events. I wasn’t wearing a purple shirt, but I was super busy doing behind-the-scenes work. After doing a retrospective at the end of the day, here are some changes I would consider next time around:
- I would like to see lightning conversations or maybe an entire track dedicated to Lean Coffee
- I would like to offer a coaches’ corner, where attendees could go ask a qualified coach for some help
- Lastly, ironically, I would consider adding a planned track.
Keep your eyes open for the next event. I doubt I can wait an entire year to do this again. Maybe we can do this in Denver or Atlanta? Maybe we can do a half-day Lean Coffee event?
What are your thoughts?
It’s a bit like opening a present and finding another present inside… if you’re not expecting it, it can be a bit jarring. But if you know how to work with this idea, it can be fun and exciting!

Common Examples
There are a lot of great uses for higher order functions, including many common functions like “debounce” or “throttle” found in the underscore / lodash libraries.
In this example, a throttle function is used to prevent a window resize event in a browser from constantly re-working the layout of the screen.
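The embedded example didn’t survive here, but a hand-rolled throttle along those lines might look like the sketch below. Lodash’s _.throttle is more full-featured; relayout is a stand-in name for the expensive layout work.

```javascript
// A minimal throttle: the returned function invokes fn at most once
// per `wait` milliseconds; extra calls inside the window are dropped.
function throttle(fn, wait) {
  var last = 0;
  return function () {
    var now = Date.now();
    if (now - last >= wait) {
      last = now;
      return fn.apply(this, arguments);
    }
  };
}

// Hypothetical resize handler: re-works the layout at most every
// 200ms instead of on every single resize event.
function relayout() { /* expensive layout work */ }

if (typeof window !== "undefined") {
  window.addEventListener("resize", throttle(relayout, 200));
}
```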
The basic idea is to pass in a function and a context object to represent “this” within that function.
Notice that the “bind” function itself doesn’t do anything more than return another function. Some higher order functions will do some calculations and set up some other code before returning the inner function, and others won’t. The important thing is that the bind function returns another function.
The inner function – the one that gets returned – is where the real magic happens in this case. When this function is returned, it will be assigned to a variable of your choosing. That variable is now pointing to a function that will:
- split the current arguments object into a proper array
- call the original function (passed as the “fn” parameter), and
- apply the “ctx” variable as the context (“this”) for the original function
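The original code sample isn’t reproduced here, but a sketch consistent with the three steps above might read:

```javascript
// Higher order function: returns a new function that invokes the
// original `fn` with `ctx` applied as its context ("this").
function bind(fn, ctx) {
  return function () {
    // split the array-like arguments object into a proper array
    var args = Array.prototype.slice.call(arguments);
    // call the original function, applying ctx as its "this"
    return fn.apply(ctx, args);
  };
}

// The "foo" example from the text:
function foo() {
  console.log(this.bar);
}

var bound = bind(foo, { bar: "this is a test" });
bound(); // logs "this is a test"
```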
This very simple example creates a “foo” function that logs “this.bar”. The bind function is used to create a function with an object literal specified as the context. Calling the resulting function will produce the expected “this is a test” console message.

Dynamically Constructing Object Methods
In a single page app, for example, you may have some code to render specific views onto the screen when someone clicks a link or a button in a menu. Each of the menu items produces a different view, but all of them must be shown within the same basic layout.
When the app is already up and running, you would render the new view into the proper part of the layout. But, if the user hits the refresh button on their browser, you want to make sure the layout is in place before putting the view into it.
To manage this, you could have a “showLayout” function that returns a promise. Inside each method that shows a specific view, you can call this method and wait for the promise to resolve.
This works, but it starts to get ugly with all the .then calls. Also, if you need to change how the showLayout method is called, you have to change it in every one of these methods. Maybe that’s not a big deal, maybe it is.
Using higher order functions, though, this code can be simplified. Instead of putting the showLayout().then() code in every single function, use a higher order function to construct that behavior at runtime:
Here, the showLayout code is encapsulated in a single location – the “useLayout” function. This function returns functions that know about the MyApp object and assume they will be attached to that object.
The useLayout function may be less than re-usable outside of the MyApp object, but that’s ok. The purpose was to encapsulate the call to showLayout so that this call can be changed quickly and easily, as needed.
Reducing the amount of code duplication is nice, and it’s also allowed a reduction in the amount of code for each of the methods. They are easier to read and easier to modify.
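The original listing isn’t shown here, but the pattern described might be sketched like this. The showLayout body, the MyApp method names, and the view-rendering code are stand-ins, not the original implementation.

```javascript
// Assumed to render the layout (if needed) and resolve with it.
function showLayout() {
  return Promise.resolve({ name: "main layout" });
}

// Higher order function: wraps a view-showing function so that the
// layout is guaranteed to be in place before the view is rendered.
function useLayout(fn) {
  return function () {
    return showLayout().then(function (layout) {
      // the returned function assumes it is attached to MyApp
      return fn.call(MyApp, layout);
    });
  };
}

var MyApp = {
  showHome: useLayout(function (layout) {
    // render the home view into the layout (elided)
    console.log("home view rendered into", layout.name);
  })
};

MyApp.showHome();
```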
But what happens if you need to pass additional parameters to the methods on MyApp at runtime? Say, an ID from a router?

Passing Args From The Function Call
If your app needs an ID or other data passed from a router, the useLayout function will have to account for this. It will still need to pass the layout along to the destination function, but it will also need to pass the id or any other parameters along as well.
To do this, modify the function that is returned from useLayout and have it collect the call-time arguments (using ES6 rest parameters). Then, just before fn.apply is called, add the layout object to the args array – prepended or appended, it doesn’t matter, as long as the methods expect it in that position.
Now when the method on MyApp accepts a parameter, you will still get the layout and the parameter specified in the call:
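Again, the original listing is missing; here is a sketch of the modified useLayout and the editThing example, with names assumed from the surrounding text:

```javascript
// Stand-in: resolves with the layout once it is in place.
function showLayout() {
  return Promise.resolve({ name: "main layout" });
}

function useLayout(fn) {
  // collect the call-time arguments with an ES6 rest parameter
  return function (...args) {
    return showLayout().then(function (layout) {
      // prepend the layout so it arrives as the first parameter
      return fn.apply(MyApp, [layout].concat(args));
    });
  };
}

var MyApp = {
  editThing: useLayout(function (layout, id) {
    // render the edit view for `id` into the layout (elided)
    console.log("editing thing", id, "inside", layout.name);
  })
};

// the caller passes only the id; the layout is injected behind the scenes
MyApp.editThing(42);
```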
The layout is injected as the first parameter to the editThing method implementation, but this is done behind the scenes. When calling editThing, only the ID needs to be specified. The rest of it is handled by the implementation of useLayout.

Put The Fun In Higher Order Functions
With a combination of functions returning functions and a fair understanding of arguments, “this”, and .apply, you can easily compose objects out of re-usable behaviors with very little code. This means less code to write, less code to maintain, and easier changes to the implementation of the common behaviors.
About the Presentation
Welcome to the SlideShare recap of my presentation, “The Shape of Uncertainty,” given at the DevOps Enterprise...
Pivotal Tracker is an always-up-to-date tool that you leave open all day. When Tracker was young, the only way to accomplish that was some variation of HTTP polling. But the web has evolved since then, and so have the protocols. Websockets are supported by every major browser, and server-sent events are supported by most (with hacks for ones that don’t). Push technologies are a viable option for change propagation.
Tracker has many years’ worth of investment in polling infrastructure. It’s one of our most reliable endpoints. Our synchronization mechanism is based on event sourcing (which lends itself to polling very nicely). Every change you make to a project (including stories, comments, and tasks) increments its version and produces an event. Each time your browser polls, it asks for all the events (changes) since the most recent version your browser has seen. It then uses these to update its internal state. This is very bandwidth efficient because we only transmit changes instead of the entire state of your project, which can be very large, especially for established projects. This pattern is extremely robust even in the face of unreliable networks.
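In browser terms, that synchronization loop might be sketched as follows. The URL shape, field names, and the injected fetchEvents function are illustrative assumptions, not Tracker’s real API.

```javascript
// Version cursor: the most recent project version this client has seen.
var lastVersion = 0;

function applyEvent(event) {
  // update local state from one change event (elided)
}

// fetchEvents is injected: given a URL, it resolves with an array of
// change events, each carrying the project version it produced.
function poll(fetchEvents) {
  return fetchEvents("/project/42/events?since_version=" + lastVersion)
    .then(function (events) {
      events.forEach(applyEvent);
      if (events.length > 0) {
        // advance the cursor; only changes are ever transmitted,
        // never the entire project state
        lastVersion = events[events.length - 1].version;
      }
    });
}
```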
As we grow, we’re feeling the pain of polling for changes. Sending an HTTP request over SSL is actually quite involved. First, TCP goes through its handshake. Then, SSL goes through its handshake, before HTTP sends a request with full headers, and finally the server responds. Setting the Connection: keep-alive header helps, but we still have to send two full sets of headers plus body every time. Most of the time, the answer to your polling request tells your browser that it’s up-to-date. All that work just to do nothing.
We dreamed up a full rewrite in our lab, to unleash upon unwitting Tracker users, but it was simply out of the question. Tracker is a tool that many people depend on, and even small amounts of downtime are unacceptable. A full rewrite is also not very agile. We wanted to be able to quickly and transparently shut off the new solution and go back to polling if something went wrong. Also, we wanted to do the authentication in our main Rails app so that we didn’t spread authentication and authorization logic around. Our one weird trick was our answer to these constraints.
Polling wouldn’t be so bad if we just knew when to ask the server for updates. However, it’s impossible to know without some message from the server. Push services can easily solve that. What we really need for our push MVP is just a message that tells your browser that it’s out of date and it should poll. And if we just made our polling interval longer, push could silently fail and your browser would still get updates (even if they were more delayed). And if the push server went down, we could even detect that and go back to regular polling intervals.
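The client half of that MVP can be sketched as below. The "stale" event name and the socket’s on(event, handler) shape are assumptions, and poll stands in for the existing polling code.

```javascript
// Wire a push socket to an existing poll function: push carries only
// an "out of date" nudge, and the browser polls as it always has.
function connectPush(socket, poll, fallbackIntervalMs) {
  // keep a (widened) regular polling interval, so a silently failing
  // push channel only delays updates instead of losing them
  var timer = setInterval(poll, fallbackIntervalMs);

  // when the server says the project changed, poll right away
  socket.on("stale", function () {
    poll();
  });

  return function disconnect() {
    clearInterval(timer);
  };
}
```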
The next question was how to implement this. We evaluated many push protocols but settled on socket.io because of its mature client/server and protocol. After evaluating the node.js server implementation, we found it unable to meet our performance requirements with the number of VMs we were willing to throw at it. We instead chose to implement the server component in Go using some great third-party libraries like go-socket.io, redigo, and the gorilla toolkit. We wanted to keep our auth logic in Rails, which meant that we needed some way for our Rails app servers to communicate with an arbitrary number of push servers. Ideally, Rails wouldn’t have to know about the push servers, and the push servers wouldn’t have to know about our Rails app servers. A publish/subscribe pattern fit best. Redis’ PUB/SUB facilities solved that perfectly. We ended up with something like this:
The process of rolling this out was relatively painless and quite fun. We’re currently getting a little under 30K max active connections. The Go-based server was a great decision. Its performance characteristics are predictable and reliable. Our MVP and feature flagged strategy meant a relatively stress-free rollout, as well. We even had to take down our push server for a day because of some network interruptions that exposed a concurrency bug. We received no tweets or support emails. We believe that nobody even noticed.
In the future, we want to leave polling behind and move on to a fully push-based architecture. The main reason is simplicity. Having to maintain and monitor polling infrastructure as well as push infrastructure is operationally complex. It also means that bugs could show up in more places. This means that our one weird trick is really just a stepping stone toward eventually unifying polling and push.
We’re interested to hear your feedback. Feel free to comment here or email us at email@example.com.
The post One Weird Trick to Switch from Polling to Push—with Zero Downtime! appeared first on Pivotal Tracker.
For many years, folks in the Agile community have been recommending that performance reviews be eliminated from the corporate world. In 2005 while coaching at Capital One, I remember many discussions on the awfulness of performance reviews. This was really my first understanding of the depth of culture change required to be Agile.
Now, this concept of eliminating performance reviews is gaining traction outside the Agile environment. Here is a great LinkedIn Pulse post by Liz Ryan in which she explains in depth about killing performance reviews.
From her article:
A little voice in the back of my brain nagged at me: “Despite your efforts to make them more compassionate and less uncomfortable for everyone, performance reviews are stupid from the get-go, Liz!
“How does one human being get to evaluate another one, when their personalities and perspectives may be radically different?”
Consider using other techniques to help with improvement efforts among your staff. Lean has Kaizen. Agile has Retrospectives.
Real Agility means that learning is inherent in the culture of an organization. Performance reviews establish extrinsic motivators for learning… and all the research points to the idea that learning is much more powerful when it is intrinsically motivated.
Consider some other tools that might help your team to work more effectively, while maintaining intrinsic motivation:
- Retrospectives (see retrospectives.com and Retr-o-mat)
- The OpenAgile Skills Matrix
- Daniel Pink’s video about motivation called “Drive”
- Setting a powerful team, product or project vision
Finally, consider that, at least in Scrum, the concept of a self-organizing, self-managing team makes it very difficult to do performance reviews. It is hard to apportion “blame” or “praise” to individuals. Each team member is dynamically deciding what to do based on the needs of the team, their own skills, and their interest. Team members are often collaborating to solve problems and get work done. Traditional roles with complex RACI definitions are melted away. Performance reviews are very difficult under these circumstances.
On Friday, October 2nd, Xebia organized the inaugural edition of TestWorks Conf. The conference was born out of the apparent need for a hands-on test automation conference in the Netherlands. Early on, we decided that a high level of engagement from the participants was key to achieving this. Thus, the idea of making everything hands-on was born. Not only did we have workshops throughout the day; people should also be able to code along with the speakers during talks. This, however, posed a challenge on the logistical side of things: how to make sure that everyone has the right tooling and code available on their laptops?

Constraints of a possible solution
Just getting all code on people's machines is not sufficient. As we already learned during our open kitchen events, there is always some edge case causing problems on a particular machine. In order to let participants jump straight into the essentials of the workshop, it would need to meet the following requirements:
- Take at most 10 minutes to be up and running for everyone
- Require no internet connection
- Be identical on every machine
- Not be intrusive on people's machines
The first decision we had to make was opting for local installations or a virtual machine. Based on the requirements, a local installation would require participants to at least install all the software beforehand, as the list of required software is quite large (it could easily take 60+ minutes). If we were to go this route, we would have to build a custom installer to make sure everyone has all the contents. Having built deployment packages for a Windows domain in the past, I know this takes a lot of time to get right, especially if we needed to support multiple platforms. Going down this route, it’s questionable whether we could satisfy the final requirement. What happens if software we install overrides specific custom settings a user has made? Will the uninstallers revert this properly? This convinced us that using a VM was the way to go.

Provisioning the virtual machine
In order to have all the contents and configuration of the VM under version control, we decided to provision it using Vagrant. This way we could easily synchronize changes in the VM between the speakers while preparing for the workshops and talks. It also posed a nice dilemma: how far do you go in automating the provisioning? Should you provision application-specific settings? Or just set these by hand before exporting the VM? In the end, we decided to have a small list of manual actions:
- Importing all projects in IntelliJ, so all the required indexing is done
- Putting the relevant shortcuts on the Ubuntu launcher
- Installing required IntelliJ plugins
- Setting the desired Atom theme
So, now we have a VM, but how do we get it into everybody’s hands? We could ask everyone to provision their own beforehand using Vagrant. However, this would require additional work on the Vagrant scripts (so they’re robust enough to be sent out into the wild), and we would need to automate all the manual steps. Secondly, it would require everybody to actually do these preparations. What if 30 people didn’t and started downloading 5ish GB simultaneously at the start of the conference? This would probably grind the Internet at the venue to a halt.
Because of this, we decided to make an export of the VM image and copy it to a USB thumbdrive, together with VirtualBox installers for multiple platforms. Every participant would be asked to install VirtualBox as preparation, and would receive the thumbdrive when registering at the conference. The only step left would be to copy all the contents to 180 thumbdrives. No problem, right?

Flashing large batches of USB drives
The theory of flashing the USB drives was easy. Get some USB hubs with a lot of ports, plug in all the USB drives, and flash an image to all of them. However, practice proved otherwise.
First of all, what filesystem should we use? Since we’re striving for maximal compatibility, FAT32 would be preferred. This, however, was not feasible, since FAT32 has a file size limit of 4GB, and our VM grew to well over 5GB. This leaves two options: ExFAT or NTFS. ExFAT works by default on OSX and Windows, but requires an additional package to be installed under Linux. NTFS works by default under Windows and Linux, but is read-only under OSX. Since users would not have to write to the drives, NTFS seemed the best choice.
Having to format the initial drive as NTFS, we opted for using a Windows desktop. After creating the first drive, we created an image from this drive which was to be copied to all the remaining drives.
This got us to plug in 26 drives in total (13 per hub), all ready to start copying data – only to find out that the drive letter assignment Windows does is a bit outdated.
When you run out of available drive letters, you have to use an NTFS volume mount point. The software we used for cloning the USB drives (imageUSB) would not recognize these mount points as drives, however, so this put a limit on the number of drives to flash at once. When we actually flashed the drives, both hubs turned out to be faulty, disconnecting the drives at random and causing the writes to fail. This led us to spread the effort of flashing the drives over multiple people (thanks Kishen & Viktor!), as we could do fewer per machine.
Just copying the data is not sufficient however. We have to verify that the data which was written can be read back as such. During this verification, several USB drives turned out to be faulty. After plugging in and out a lot of drives, this was the result:
During the conference, it turned out that using the USB drives and virtual machine image (mostly) worked out as planned. There were issues, but they were manageable and usually easy to resolve. To sum up the most important points:
- Some vendors disable the CPU virtualization extensions by default in the BIOS/UEFI. These need to be enabled for the VM to work
- The USB drives were fairly slow. Using faster drives would've smoothed things out more
- A 64-bit VM does not work on a 32-bit host
- Some machines ran out of memory. The VM was configured with 2GB of RAM, which was probably slightly too low.
- Setting up the machines was generally pain-free, but it would still be good to reserve some time at the beginning of the conference for this.
- A combination of using USB drives together with providing the entire image beforehand is probably the sweet spot
Next year we will host TestWorks Conf again. What we learned here will help us to deliver an even better hands-on experience for all participants.
Working remotely, especially as a developer, has become easier over time, yet I still see a lack of any real remote work in the Bay Area. Despite very high living expenses, expensive office space, and almost zero unemployment among developers, companies rarely seem to consider remote work. I assumed that as the demand for developers intensified we’d see a corresponding interest in adding remote workers to teams, or even going fully remote, especially with companies like 37signals and others paving the way.
The typical Bay Area company is looking for on-site talent despite the cost and difficulty of recruiting. With over 20 years of experience, I acknowledge it’s nice to have a co-located team, but certainly not at the expense of having a good team. There are some incidental communication and collaboration benefits of working side by side in a room with a team, but many of these same benefits can be mimicked using technology. Some of the tools that are commonplace, cheap, or free are:
- Remote Pairing Tools (ScreenHero, tmux, VNC, Skype in a pinch)
- Hosted Project Trackers (Pivotal, Trello)
- Source Control with built-in Code Reviews (GitHub, Bitbucket)
- Issue Trackers (ZenDesk, Redline)
- Video Calls (Skype)
- Shared chat platforms (Slack, HipChat, Campfire)
So why are we still largely living in the working experiences of the 1990s?
Chatting with her reminded me of my recent visit to Australia. The Context Matters team had invited me to sit in on the session they sponsored at Agile Australia, featuring a presentation by RMIT University’s Catherine Haugh (SPC/RTE). Even more fun, I had the opportunity to visit their development site in Melbourne to help celebrate their accomplishments (and to see the biggest Feature kanban I had ever laid eyes on!).
Her presentation built on the existing RMIT SAFe Case Study, adding video testimonials from the teams and the ART stakeholders, as well as some new data showing significant improvements in team and stakeholder NPS scores.
For example, in May 2014, before they launched their “Student Administration Agile Release Train,” their stakeholders gave them a -90 Net Promoter Score. They launched the train in June, and by April 2015, their NPS had increased to +13! Now that’s a measurable business result.
The update is definitely worth viewing, especially for those interested in the realm of education administration. You can view the presentation and embedded videos here.
Many thanks to Em and Catherine for keeping the community up to date with their journey.
Excited to share slides describing the latest evolution of my story (at the Toronto Agile Conference). I talk about how we may dare to create environments where Agile may flourish, so that we have Organizational Agility. This requires reinventing organizations. My message is really simple: if you want Breakthrough Results, Cultivate Culture to Create Places People Love […]