Since we love touring and meeting our community of users, we’re setting out on the road once again, this time to more cities than ever! Over the next 6 months you’ll be able to see us and ask any questions you have, in more than 10 cities in Europe and the US.
This year, we're very excited to return with the City Tour to share all the news around the SonarQube platform and to show you our latest product: SonarLint, which lets developers track the quality of their code in real time, as they type it. Very powerful!
Here is what will be covered at each stop of the tour:
- The Leak Approach: a new paradigm to manage Code Quality
- SonarQube 5.x series in demo
- SonarQube integration with Microsoft ALM
- SonarLint, the missing piece of the puzzle
- Customer feedback
- Sonar Analyzers and well-established standards
- Roadmap for the platform
- Roadmap for Sonar Analyzers
It will also be a great opportunity to meet other SonarQube users to share tips and tricks and discuss your experiences with the platform.
Is there something you would like to know or ask us but haven’t had the opportunity to do so? Now’s your chance! Sign up for the free event in your preferred city, and we’ll see you soon!
Registration is open on our website, so pick the city you want, fill in the form, and you'll be all set.
Join the conversation by using #SSCT2016 in all your tweets about the events.
See you soon!
The older version of this was some variant of "Extreme Programming is just hacking" or "Extreme Programming is just cowboy coding".
In essence, the suggestion is that Agile is equivalent to "Code and Fix" or "Cowboy Coding".
Kent Beck describes the heartbeat of an Extreme Programming episode in response to the "Why is XP not just hacking?" question. Paraphrasing for length...
- Pair writes next automated test case to force design decisions for new logic independent of implementation.
- Run test case to verify failure or explore unexpected success.
- Refactor existing code to enable a clean and simple implementation. Also known as "situated design".
- Make the test case work.
- Refactor new code in response to new opportunities for simplification.
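As a minimal sketch of one pass through this cycle (a hypothetical example of mine, not Beck's; the function and test are invented):

```python
# The pair writes the test first. At this point format_price does not
# exist, so running the test fails - and that failure is the point: the
# test forces the design decision before any implementation.
def test_format_price():
    assert format_price(5) == "$5.00"
    assert format_price(5.5) == "$5.50"

# Then the simplest implementation that makes the test pass.
def format_price(amount):
    """Format a numeric amount as a dollar string."""
    return "${:.2f}".format(amount)

# With the test green, refactor if a simpler form presents itself.
test_format_price()
```

The discipline is in the order: test, failure, implementation, refactoring - not in the size of the example.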
Granted, not every Agile team has this kind of technical discipline. Hence, so-called Flaccid Scrum and the advocacy of Two-Star Agile fluency.
Also, granted, sometimes one should throw something together quickly when the purpose of the exercise is to test an experimental concept, for example when "spiking a solution" or building an initial MVP.
It's the end of an era. After nearly 10 years of updates and lessons-learned, we’ll be removing Targetprocess v.2 following next month’s build. As we’ve explained in earlier posts, we’re doing this to help accelerate our development speed and make important improvements to Targetprocess v.3.
Most of our users are already on Targetprocess 3 and won't notice any difference. For those of you still using Targetprocess 2, this change means you’ll no longer have access to old lists, dashboards, or the Targetprocess 2 interface. Time sheets and custom reports will be preserved and migrated to Targetprocess 3.
If you need help transitioning to Targetprocess v.3 and training users, please do not hesitate to schedule a “Migration to Targetprocess 3” workshop.
Our last Targetprocess 2-supported build will be released in May 2016. All updates after this will not be compatible with v.2. If you absolutely don’t want to lose access to Targetprocess 2, let us know via email@example.com. We don’t recommend this option, but it’s your choice and we’ll take steps to help you keep your access to v.2. In this case:
For On-Demand accounts: We’ll move your account to a separate environment where you can continue working off of Targetprocess 2, but will no longer receive updates or new features.
For On-Site accounts: You won’t be able to upgrade your Targetprocess instance with any build released after May 2016. You can keep using Targetprocess 2, but with no new features or updates.
We’ve had some great times with Targetprocess 2, and we’ll always look back at it fondly. If you want to read the whole story behind the product, take a look at our company chronicles (2004-2014).
Let us know if you have any questions or concerns about this change. If you have any fond memories of v.2, we’d love to hear them in the comments. Now, it’s time for us to move on and look ahead to the future of Targetprocess. We hope to see fans of Targetprocess 2 there as well.
This is a popular variant of "Agile is fundamentally just..." and is similar to "Agile is just for programmers", differing only in not wanting to identify with "Agile".
Let's work through the reasoning.
When building something effectively, how do you know what is right?
In order to know what is right, you need to understand both the problem space (What problem are we trying to solve? What forces are in play? etc.) and the solution space (What options do we have? What trade-offs are in play? etc.).
Nominally developers have the best understanding and insight into the solution space. "Let the developers do what they think is right" implies that understanding the solution space magically means that you understand the problem space. This doesn't make any logical sense.
One-way communication from developers to product owners / managers is as illogical as one-way communication from product owners / managers to developers.
Bringing perspectives together in order to gain understanding and insight of both problem and solution space does make logical sense.
Doing what you think is right is ineffective if you have no justifiable reason for those beliefs.
See also Lean Startup.
Doing Agile and Being Agile are different. Here is a popular infographic that explains what Agile really is and illustrates common misunderstandings about it. Doing Agile is about the practices: standups, user stories, iterations, etc. There are significant benefits from using Agile practices – I see it as the "common sense" of getting work done. […]
Oftentimes you find yourself wanting to create a directory with multiple subdirectories. For example, when creating Ansible roles I almost always create the role and three to five different subdirectories underneath it.
This is actually quite easy to do in any shell that supports brace expansion, such as bash and zsh (the only two I have tried; note that brace expansion is an extension, not strictly part of POSIX).
This will yield the following directory structure.
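The command and output from the original post were not preserved; a typical version, using shell brace expansion (the role name `myrole` and the subdirectory names here are illustrative), would be:

```shell
# Brace expansion turns one mkdir call into five directories.
mkdir -p myrole/{tasks,handlers,templates,defaults,vars}

# Resulting tree:
# myrole/
# ├── defaults
# ├── handlers
# ├── tasks
# ├── templates
# └── vars
ls myrole
```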
Pretty simple but incredibly handy. This also works with many other arbitrary shell commands - give it a try!
On January 5, 2016, an entirely new version of the Scaled Agile Framework (SAFe) was released to the general public. This was no minor incremental release but actually a large batch with some fundamental changes and improvements.
In past releases of SAFe there was always a combination of minor role refinement, terminology change, and/or revised guidance. Sometimes there were minor look and feel changes on the public website. However, this time a major evolution has occurred. The framework is now more easily scalable to thousands or tens of thousands of Agile practitioners working towards a common set of solutions that require a different level of thinking, coordination, and adoption patterns.
The Journey to V4
To understand the magnitude of this release, the words “thousands” and “tens of thousands” are significant. In SAFe v3 and prior versions, scaling was aimed at a smaller scale, offering guidance for only hundreds of practitioners. The previous iteration of the SAFe Big Picture organized the program, as well as the key roles executing various aspects of it, like this:
It was more common to see one Value Stream (VS) with a single Agile Release Train (ART). As the VS expanded in size due to Agile adoption, you could see additional ARTs form. Because each ART was around 50-125 practitioners, the math worked out. It got a little tricky coordinating multiple ARTs within a large VS, but it wasn’t unreasonable. For small and medium enterprises doing work largely based upon Scrum and Extreme Programming (XP) concepts, it was a good start for the most part.
However, larger and larger enterprises were starting to realize the value of this framework – while still recognizing that traditional development techniques like Waterfall weren’t going to stop immediately. In addition, many existing enterprises embraced Kanban at many levels alongside Scrum. The problem was that these weren’t explicitly called out in v3. Sure, it was implied – and if you dug hard enough you might find an abstract or blog describing an approach or two – but they weren’t codified within the framework. So what did enterprise transformation consultants like myself do? Well, many of us experimented with different adoption patterns to fit our clients' needs. If you are or have been an Enterprise Agile Transformation Coach, you may recognize some of these design patterns:
- Developing a Portfolio of Portfolios to aggregate strategy, budgets, and forecasting
- Inventing new roles to assist in multi-ART coordination built on existing Program level roles
- Defining new ceremonial cadences to synchronize between Value Streams, ARTs, or traditional Waterfall
- Identifying new artifacts to align all of these new efforts together
- Adding flexibility in the planning or execution of work through flow-based pull models such as Kanban
If any of these sound familiar, then you’re not alone. In the end, the SAFe framework authors recognized the same needs. Finding the common best patterns for this evolution required leveraging the experiences, feedback, and enhancement requests from a diverse set of Agile ecosystems such as:
- Existing SAFe Program Consultants (SPCs) in the field
- Larger and larger private and public enterprises as well as government institutions
- Social communities and existing Business Partners
- Embedded software/hardware customers
With all of this input and the merging of experimental framework derivatives such as SAFe LSE, the next version of the Scaled Agile Framework was born. To clearly indicate the transformation of the transformation program, even the name was changed: SAFe 4.0 for Lean Software and Systems Engineering. The SAFe Big Picture naturally required an overhaul:
What are the Major Changes in SAFe 4.0?
Now that we’ve covered the biggest superficial difference, I’d like to briefly describe the major differences between SAFe 4.0 and the previous version(s). In my opinion, there are five major differences:
1. New Value Stream Layer
Perhaps the most significant change in the SAFe update is the addition of a Value Stream layer. This optional layer provides additional cadence, synchronization, and guidance, along with new roles and the codified ability to scale to tens of thousands of Agile practitioners working towards a common set of solutions.
The concepts of Solution Intent and Solution Context are called out in this layer. Why? The initial Solution Intent may require multiple solution contexts to consider and deploy as part of the original intent. Platforms or sets of solution providers are simplified examples of this.
Systems Engineering concepts such as set-based design (as highlighted in SAFe Principle #3 “Assume variability; preserve options”) are called out as important things to consider when envisioning the Solution Intent.
Model-Based Systems Engineering – the paradigm that emphasizes visual models for simulation, design, and communication – and the principles that guide this practice are now called out. We used to know this by a different name – Agile Modeling – but it doesn’t exactly have to be Agile per se. It does have to be Lean and just enough to work with external solution providers that may require a minimal level of formalized specification.
Finally, we have the concept of Supplier(s) and Customer(s). This is huge. Suppliers could be internal within a large enterprise – such as business or technical organizational units – or it could be external in terms of parts suppliers or large software/hardware/firmware 3rd parties. The role of Customer is a new called-out addition as well. Agile purists may balk at this with statements like “That’s what a Product Manager/Product Owner represents.” However, that is a misconception. Customers can be indirect (internal facing or proxied by a Product Manager/Product Owner) or they can be truly external to the Enterprise – such as the receiver of the system(s) (e.g. A government institution, or a hardware manufacturer like Apple, Samsung, or Microsoft).
(Stay tuned for a much deeper dive into the SAFe Value Stream level more specific to large organizations.)
2. Kanban Upgraded to First-Class Citizen
Kanban as a flow-based pull mechanism now exists at all levels of the Framework. Kanban systems can be connected together based upon trigger points in the state transition models.
3. New Foundation Layer
SAFe rests upon a foundation of Leadership, Core Values, and a Lean|Agile mindset. The Foundation layer incorporates guidance around Communities of Practice, the SAFe House of Lean, SAFe Principles, and consistent implementation patterns.
4. Enablers
Gone are the separate “Architectural Epics, Features, Stories” as planning work items. They have evolved into what they truly are: technology enablers of business capabilities. Enablers can be subdivided into Architecture, Infrastructure, or Exploration items. As a result, separate Kanban systems are not required between Enabler and Business items.
5. Enterprise
SAFe 4.0 calls out the fact that a SAFe Portfolio is but one slice of a larger enterprise organization, guided and constrained by common governance, strategy, and funding mechanisms. In other words, each SAFe implementation program (mapped out in the Big Picture) could be just one portfolio in an Enterprise, which may have several portfolios, each with its own SAFe program in place. This is potentially where larger enterprises will get the most value out of SAFe 4.0.
This is just a brief summary of how I, as a SAFe Program Consultant Trainer, see the big changes in the newest version of the Scaled Agile Framework. Stay tuned for future blogs where I dive into these changes in more detail and discuss why you should, or shouldn't, adopt them for your organization as part of your Agile transformation journey.
For more information about the Scaled Agile Framework and SAFe 4.0, check out the website.
Best known for being a global leader in navigation and mapping products, TomTom is also the mapping provider for Apple Maps, and the maps and traffic data provider for Uber drivers in over 300 cities worldwide.
An interesting thing about TomTom’s SAFe adoption is that it started in 2012, which gives us a view into a fast-growing company that has worked within the Framework over a five-year period. There were some early wins (see quote below), as well as challenges and learnings, which they summarize in detail as the “Good,” the “Bad,” and the “Ugly.”
“There is no doubt in my mind that without SAFe and Rally we would not have launched this in only 140 days. It is also our best new product ever.”
Re: TomTom GO500 sold in 45 countries
Before adopting SAFe, TomTom’s challenges had a familiar ring:
- Organized as waterfall projects
- Many projects working in all parts of the code with minimal module or component ownership
- Many releases months or even quarters late
- Multiple code lines and branches
- Negligible automated testing & no continuous integration
- “Downstream” teams spend 3-5 months accepting the code, and often changing it
- Poor visibility and facts-based decision-making
Once they decided to adopt the Framework they got it right from the start, providing SAFe training for their CTO, SVPs, and 50 CSMs and CPOs. Today, SAFe is practiced by all of TomTom’s large product teams representing navigation software, online services, map creation and sports software. That represents approximately 750 FTEs, with 200+ trained and certified in SAFe.
One of the things I do on a regular basis is facilitate workshops and events of different kinds - open space being one of them. I'd like to share with you a small trick of the trade I have evolved while facilitating those: the Idea Mingle.
I really like the energy the open space format brings to the discussions. People are familiar with the format of participating in a discussion, and even shy or introverted people are given room to participate in a comfortable way.
However, I have sometimes found that the initial part, where you pitch topics for discussion, can be a little bit slow. This is seldom a problem with groups that are used to open space. Everybody involved understands what "level of ambition" is needed for posting a topic - acknowledging the fact that the really interesting part will be the discussion that unfolds around it. However, with groups unused to open space I have found that people can get shy. Posting a subject might seem pretentious: "what do I have that everybody else should find interesting?". So, when asked point-blank "what do you think should be discussed?", their mind goes blank.
To give the "pitching phase" of open space a soft start I do a "Idea Mingle" that gives people a chance to evolve their interests into discussion topics before it is time to post them. It is a "mingle party" with the purpose of distilling ideas for discussion, thus "Idea Mingle".
The instructions I give are simple:
- Look around the room and lock eye contact with someone you have not yet spoken to today.
- On my call, pair up
- Introduce yourself by saying "Hi, my name is …" to each other
- Continue by saying "I think it would be interesting to discuss something around …" and fill in the blank. It does not need to be specific; it can be broad or vague, e.g. "something about testing".
- Have a short discussion, digging a little bit into each other's interests
- After two minutes I will break the discussion
The basic idea is that the one-on-one format is a more comfortable setting for talking about an idea than doing it in public. Speaking to a stranger is still an uncomfortable situation for a lot of people, me included. However, the short format and the very focused scope make it manageable - as opposed to the usual mingle setting: "find a stranger, talk to him/her, and be nice and interesting for an undefined period of time". Just the thought gives me the creeps.
Now, after the first one-on-one, all participants have bounced one of their interests off another participant. Almost certainly they have received acknowledgment that "it is an interesting field", and most have probably got feedback of the form "testing is interesting; I myself am interested in how to use automation to make it easier".
OK, so people now have a slightly refined idea for an interesting discussion. But for most of them it is not yet ready to publish as an open space discussion topic.
Allow me a slight detour.
At a workshop class with Lyssa Adkins and Leslie Stein there was a small knowledge-sharing exercise. During that exercise I was given the task to explain "sprint retrospective" to a few of the other participants. I was given literally no time to prepare, and my presentation time was limited to five minutes (if I remember correctly). It was really awkward.
Immediately after the first group of listeners, I was sent a second group of classmates, and had to explain to them. This time it was less awkward. A third group was sent to me, and this time I basically knew what to say and what to leave out. The fourth time, the explanation went smoothly. So, in four iterations a very awkward rambling was refined into a pretty crisp micro-presentation.
Back to open space and Idea Mingle.
The next instruction for Idea Mingle is: Now, lock eye-contact with someone else. And, we do the same thing once again.
At the end of this second one-on-one the idea might have evolved to "automation in testing, and the trouble with databases".
And then we repeat the one-on-one a total of four times.
At this point in time, many of the participants have received acknowledgement that they are interested in a field others are also interested in. And a lot of them have refined an initial rough idea into something that is a more specific topic, e.g. "How to involve DBAs in test automation" - a topic ready for discussion.
And we are ready to form a queue for pitching topics to discuss.
Also, everyone is a little bit frustrated that each one-on-one was interrupted just when they were getting started. So all participants are eager to start the open space discussions.
Sometimes you find yourself in a weird predicament. A third party application that you’ve slapped nginx in front of insists on using internal IP addresses or ports despite your reverse proxy passing all the correct headers and other pieces required. Or maybe you’ve found yourself in the situation I found myself in last week, where you have a third party internal application wanting to reference a file on a CDN. As luck would have it, that CDN has been deprecated, and changing it requires rebuilding a jar where the script path is hardcoded and then repackaging the internal application that uses it. Long story short, we’ll soon be doing some yak shaving that could possibly take all day, so how about we just use sub_filter instead and take a nap?
Use It
Thankfully, most stock builds of nginx include ngx_http_sub_module by default. Using it is simple enough: just slap the following directive under your location or server directive in nginx.conf.
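The directive from the original post was not preserved; a representative configuration looks like this (the hostnames and upstream name are placeholders):

```nginx
location / {
    proxy_pass http://backend;

    # Rewrite the deprecated CDN hostname in response bodies on the fly.
    sub_filter 'old-cdn.example.com' 'new-cdn.example.com';
    sub_filter_once off;
}
```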
And that’s it! Note that sub_filter matches literal strings (case-insensitively) rather than regular expressions. sub_filter_once indicates whether nginx should apply the replacement once or repeatedly. One gotcha: by default this only works with a MIME type of text/html. If you want to modify the MIME types that sub_filter executes on, set `sub_filter_types` to the desired types.
Give It a Try
I’ve put together a very simple demonstration of using sub_filter for text replacement on GitHub.
An Exercise in Estimation: How many times can you fold a piece of paper in half & half again...
I do this exercise when beginning Scrum teams start story or task estimation. The exercise has a unique twist that sets it apart from task or story estimation - very few people foresee this aspect of the exercise, so it adds to the ah-ha moment.
Start by giving everyone a sheet of typical paper (8.5 x 11 inches in the USA, although the exact size doesn't matter). Then explain the exercise, but ask that no one do anything yet. First we will estimate. The task is to estimate how many times you could fold the paper in half, and then again in half, and repeat... Without doing it, what's your estimate of the number of folds?
Ask people to call out their estimates; write them on a board in no particular order or fashion.
Typical groups come up with estimates in the range of 5 - 20 folds.
If you want to do math... calculate an average estimate... or just circle the median value.
Next have the group fold the paper in half and half again up to 4 times - then STOP and estimate again. Same as last time - call out the estimates and write them down on the board.
Next - fold the paper until you are done. How many folds did you get?
Now the debrief: What did you learn in this exercise? What happened to the estimates - why did this happen? What generalizations of estimating can we learn from this example? So when do we practice this re-estimation technique in Scrum?
For BONUS points - how many times do you need to fold paper to get to the Moon?
How Folding Paper Can Get You to the Moon
MythBusters episode: Folding a large piece of Paper in Half - What's the Limit
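The bonus question yields to quick arithmetic: each fold doubles the stack, so thickness grows as 2 to the power of the fold count. A sketch of the calculation, assuming a sheet about 0.1 mm thick and an average Earth-Moon distance of 384,400 km (both assumed figures):

```python
# Each fold doubles the thickness: after n folds it is t0 * 2**n.
PAPER_MM = 0.1                    # assumed sheet thickness, millimetres
MOON_MM = 384_400 * 1_000_000     # Earth-Moon distance in millimetres

folds = 0
thickness = PAPER_MM
while thickness < MOON_MM:
    thickness *= 2
    folds += 1

print(folds)  # 42 - a mere 42 folds reaches past the Moon
```

Exponential growth is exactly why the physical folding stops after 7 or so folds, and why early estimates are so far off.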
I regularly get questions on agile retrospectives, which I'm more than happy to answer. In this blog post I'll discuss the question that I got from someone who attended one of my workshops on valuable agile retrospectives. He was planning a retrospective with a new team, and wanted my advice on which exercise to use and how to facilitate the retrospective. Continue reading →
- the value flow (cash flow, positive and negative) through the useful life of the work item
- the change in cumulative value (Net Present Value, NPV) as a function of time,
- the Cost of Delay profile (how much business value is lost as a function of the delay), and
- the Urgency profile (the rate at which value is lost as a function of the delay)
For the type of work item that was considered in part 1 (a product feature in a time-limited competitive market), here are the four curves: cash flow, cumulative value, cost of delay (as a function of the delay), and urgency (as a function of the delay) ...
Cash Flow, Cumulative NPV, Cost of Delay and Urgency
for a time-sensitive feature in competitive market
(click on image for more detail)
This feature shows a diminishing rate of cost of delay (urgency), due to the twin effects of a reduced peak in earnings and reduced period of earning, the longer the feature is delayed.
What if we were examining a different type of work item which was estimated to save a certain amount of work each week, work which is currently being contracted out to external staff? In other words the same savings would occur every week for the foreseeable life of the product. Here is an estimated projection for the 4 curves in this case ...
Cash Flow, Cumulative NPV, Cost of Delay and Urgency
for a feature providing constant benefit for a period of time
In this case the cumulative NPV is more or less a straight line (bending downwards slightly due to the present value discount), and it results in a CoD profile which is also more or less a straight line with the same gradient (bending upwards slightly). Straight line CoD profiles result in constant urgency which we can see (approximately) in the final graph in the series.
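As a rough numerical sketch of this constant-benefit case (the weekly saving, discount rate, and horizon below are invented figures, not from the post):

```python
# Constant weekly saving, discounted at a small weekly rate.
WEEKLY_SAVING = 1_000      # assumed saving per week
WEEKLY_RATE = 0.001        # assumed discount rate per week (~5%/year)
HORIZON_WEEKS = 150        # assumed life of the opportunity

def npv_from(start_week):
    """Discounted value of savings collected from start_week to the horizon."""
    return sum(
        WEEKLY_SAVING / (1 + WEEKLY_RATE) ** w
        for w in range(start_week, HORIZON_WEEKS)
    )

# Cost of Delay: value lost by delivering d weeks late.
cod = [npv_from(0) - npv_from(d) for d in range(11)]
loss_per_week = [b - a for a, b in zip(cod, cod[1:])]
# loss_per_week stays within ~1% of WEEKLY_SAVING: a near-linear CoD
# profile, hence the roughly constant urgency seen in the graphs.
```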
Different again - what about an item that would save a penalty fine from a regulator if a certain issue is not addressed by a fixed date? Here are the curves ...
Cash Flow, Cumulative NPV, Cost of Delay and Urgency
for a feature providing a step-function in benefit at a fixed date
This work item displays a sudden step-function in cumulative NPV at the point the fine would be applied, and a similar step-function in the CoD about 10 weeks before the date of the fine, since development Lead Time is estimated to be 10 weeks. The urgency profile is a spike - no urgency up to the "last responsible moment" when work must start, and no urgency after this point since you would then have passed the "first irresponsible moment"; there is no avoiding the fine after that point! In reality the CoD and Urgency profiles should be smoother since there is uncertainty in the estimate, and leaving it to the last moment increases the risk of incurring higher costs in order to hit the date, or indeed of missing the date due to unforeseen circumstances.
Finally consider the case where the savings of staff (similar to the second scenario above) would not start until a fixed date. Here they are ...
Cash Flow, Cumulative NPV, Cost of Delay and Urgency
for a feature providing constant benefit for a period beginning at a fixed date
We can see this case effectively combines the previous two, with a period of low or negative CoD, followed by approximately linear CoD up to the end of the opportunity.
We have taken some time here to look at the 4 curves (Cash Flow, Cumulative NPV, Cost of Delay and Urgency) for these 4 different types of feature because it is easy to confuse between them. In the case of the "constant benefit" item, the Cumulative NPV and CoD look almost identical. This has caused some confusion and some inaccurate statements about the use of CoD. Take care!
One observation to make about the graphs shown so far is that estimating and deriving them for real features would be difficult and error-prone. While this is true, one should not conclude that we should therefore estimate a completely different entity, one which is easier to estimate but not well correlated with the scheduling decisions we wish to make! (Sadly, you may also come across some advice like that.)
However, it does suggest that looking at archetype profiles for different types of work item may be helpful. Kanban, for example, defines 4 archetypes for CoD, which are typically used to define different Classes of Service. Black Swan Farming also suggests some archetypes. The Kanban archetypes do not correspond exactly to the types of feature discussed above, though there is some overlap.
Kanban's Cost of Delay Archetypes, from Essential Kanban Condensed
The archetypes show 4 CoD profiles:
- Expedite items are very urgent (high CoD per week) and there is no end in sight to the cost - if you wait the losses don't come to an end. It's a straightforward decision - do it now!
- The Fixed Date items also have high impact but only if you miss the deadline. The scheduling imperative here is to make sure you start before the last responsible moment and deliver before the deadline.
- The Standard profile is approximately linear to start with and tails off as the opportunity loses value. Standard items should therefore be done as soon as possible and scheduled relative to each other according to the degree of urgency and the item's size (see later discussion of WSJF).
- Finally, Intangible items have an apparently low urgency. One might ask: why do them at all? Two reasons. First, the intangible profile does indicate that a rise in urgency - possibly a steep rise - will happen in the future, so it is useful to make some progress on these items even though the short-term impact is likely to be low. Second, having some "interruptible" items in the schedule makes the system more resilient when expedite items have to be handled, or when events threaten the service level agreement for standard items.
Armed with this information about CoD and urgency profiles, we can now move forward to consider the WSJF method itself. To use it we need information about the urgency, the urgency profile, and the duration that implementation of the work item will take.
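As a brief preview, the core of WSJF is a simple ratio: divide the cost of delay per unit time by the estimated duration, and schedule the highest ratios first. A toy illustration (item names and numbers invented):

```python
# WSJF = cost of delay (per week) / estimated duration (weeks).
items = [
    {"name": "A", "cod_per_week": 8, "weeks": 4},   # WSJF = 2.0
    {"name": "B", "cod_per_week": 6, "weeks": 2},   # WSJF = 3.0
    {"name": "C", "cod_per_week": 9, "weeks": 9},   # WSJF = 1.0
]
for item in items:
    item["wsjf"] = item["cod_per_week"] / item["weeks"]

# Highest WSJF first: short, urgent items jump the queue even when
# larger items have a higher absolute cost of delay.
schedule = sorted(items, key=lambda item: item["wsjf"], reverse=True)
print([item["name"] for item in schedule])  # ['B', 'A', 'C']
```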
This is considered in the next blog in this series.
Read part 3 now: How to calculate WSJF
Back to part 1: Understanding Cost of Delay and its Use in Kanban
A big thank you to everyone who took part in our recent mobile app user survey! After analyzing your feedback, we identified some key areas to focus on. We’ve already implemented some of your suggested improvements, and we’re working hard on the rest.
We’d also like to refine the overall user experience of the app, so stay tuned for more updates. You can download the app for free from either the App Store or Google Play. Check out our latest changes below, and let us know what you think.
Download the iOS app
Download the Android app
Our latest improvements:
- Added the ability to log in via Single Sign-On (must be activated for the web version first).
- Views that you have hidden in our main web-based application will now be removed from the mobile app. This will help to declutter the left menu, and will better connect your personal customizations to the mobile version.
- Dashboards and reports have been temporarily removed from the left menu until we find a better way to display and support them.
- You can now refresh the left menu and boards’ setup by pulling down on the left menu with your finger (pull-down refresh).
- Added the possibility to open links to entities directly in the app (i.e. links to tasks, user stories and other types of cards can now be opened in-app).
- Improved caching for views: any changes made to a view will now be retained after it is reopened.
- Fixed a bug involving incorrect setting of Release and Iteration fields. Releases and iterations will now be filtered correctly at all times.
We’ve also enabled push notifications. Never miss an important work item again -- unless you want to, of course. You’ll receive notifications on your phone’s home screen when you are assigned or unassigned to an entity, when the state of an item you are assigned to changes (e.g. from “in progress” to “done”), and when you are mentioned or replied to in an entity’s comments.
What we’re working on right now:
- Improving real-time updates for boards.
- Storing all notifications in one place within the app.
- Implementing @mentions in comments.
Targetprocess iOS App:
Our latest improvements:
- Added support for @mentions in comments.
- Added the ability to log in via Single Sign-On (must be activated for the web version first).
- Added the possibility to open links to entities directly in the app (i.e. links to tasks, user stories and other types of cards can now be opened in-app).
- Added the possibility to save images attached to entities in Targetprocess to the photo gallery of your iPhone or iPad.
- Improved real-time updates for boards.
- You can now log time spent and remaining time directly from an entity's description.
- Added a Quick Add button for effortlessly adding new entities. This button will always be available from the bottom panel.
We’re also looking at improving general navigation and reworking the way entities and boards look in the app. Here is a preview of the upcoming board view, user story view, and interface for comments:
If you feel like beta-testing these new mobile functionalities or trying out our new concept for boards, let us know! You can send your feedback, ideas and suggestions to firstname.lastname@example.org.