The Seven Deadly Sins of Versioning (Part 4): REST API Versioning

The Rise of the Humans

In the book Liquid Software: How to Achieve Trusted Continuous Updates in the DevOps World, there’s a great deal of discussion about the rise of the machines, and how the need for speed in today’s (and tomorrow’s) software development environment demands that we automate as many processes as is practical and sensible. These automations allow us to design and refine more and better software. They create business efficiencies, including savings in cost and resources – human resources among them. By freeing software development personnel from numerous tedious and repetitive tasks, liquid software lets them focus their attention on software innovation, on improving software delivery and upgrade systems, and on the detail-oriented operations that humans are generally best suited to handle.

That brings us to REST APIs, which are how we interact with software services. A REST API – typically composed of modules, functions, and input/output parameters – is essentially a collection of entry points. For each entry point, there are different commands that accept different input and output data. Entry points are root pathways for accessing the description of a particular REST API, its functionality, and the URL by which it can be reached. When we talk about a REST API entry point, we are really talking about a REST API function name.

For maximum versatility, REST APIs should be both forward and backward compatible. From the client perspective, forward compatibility means an older client can still work with a more recent REST API version; backward compatibility means that a more recent client can still work with an older REST API version. This said, when upgrading a REST API, there are cases when breaking its forward compatibility is desirable as a means of forcing customers to upgrade their clients.

It’s also important to recognize that when an entry point is changed, a function name is being changed as well, and that if an entry point is removed, backward compatibility is broken. The thing that most influences backward and forward compatibility is REST API entry point management. For example, if one were to change the structure of input/output data, it would be very bad practice to keep using the same REST API entry point that existed before this structural change. In this instance, good REST API housekeeping requires the creation of a new entry point. And in doing so, it’s important to assign the entry point a new name (i.e., a new name for a new function) or, at minimum, place the REST API version number in front of an existing name.
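
To make the housekeeping concrete, here is a minimal sketch in Go (standard library only; the paths, handler logic, and JSON field names are hypothetical, not taken from any real service). The old entry point is left untouched, and the structural change in the output data is exposed under a new, version-prefixed path:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // Old response shape, still served so existing clients keep working.
    type userV1 struct {
        Name string `json:"name"` // e.g. "Jane Doe"
    }

    // New response shape with a different data structure.
    type userV2 struct {
        FirstName string `json:"first_name"`
        LastName  string `json:"last_name"`
    }

    func main() {
        mux := http.NewServeMux()

        // The existing entry point is left exactly as it was.
        mux.HandleFunc("/api/v1/user", func(w http.ResponseWriter, r *http.Request) {
            json.NewEncoder(w).Encode(userV1{Name: "Jane Doe"})
        })

        // The structural change gets a new, version-prefixed entry point
        // rather than silently altering the old one.
        mux.HandleFunc("/api/v2/user", func(w http.ResponseWriter, r *http.Request) {
            json.NewEncoder(w).Encode(userV2{FirstName: "Jane", LastName: "Doe"})
        })

        log.Fatal(http.ListenAndServe(":8080", mux))
    }

Old clients continue calling /api/v1/user unchanged, while new clients opt into the new structure explicitly by calling /api/v2/user.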

SemVer and REST APIs

One of the best ways to create a software release is to use semantic versioning (SemVer), which features a major.minor.patch versioning number scheme. In most instances, there should be a direct interrelationship between a given piece of software’s SemVer version and REST API versioning. Being mindful that software will change far more often than REST APIs, there are some best practices worth highlighting.

These practices are predicated on an understanding of the factors that affect this relationship (e.g., the types of changes being made to a particular REST API). We would certainly expect a software version change every time there is a REST API version change. Conversely, the global version of a REST API is usually the global version of the service that exposes it. Accordingly, a first best practice is to make sure that the two global versions are identical (i.e., if Service “A” is v2.1.1, then the REST API it exposes is also v2.1.1).

With REST APIs, there are two levels of versioning:

  • Overall
    For example, in v1 of a given REST API, all calls would be prefixed with https://…/api/v1/…; in v2, all calls would be prefixed with https://…/api/v2/…
  • Internal for each endpoint within a REST API
    Modifying an existing endpoint (e.g., adding or removing parameters) or adding a new one does not necessarily imply that a change is required for an overall REST API version.

REST API Versioning Sins

When examining the question of REST API versioning, be on the lookout for a number of issues:

  1. No versioning at all
    It’s quite common for REST API entry points to include no versioning at all. This leads to problems: clients never use new parameters, since they don’t know they exist, and clients keep sending parameters that have been removed and are no longer in use (perhaps having been supplanted by default values).
  2. Inappropriate Version Changes
    Even the smallest change to a REST API should generate, at minimum, a minor version increase (a patch increase is not acceptable). For example, adding data fields to an existing command, or adding a new command to an existing endpoint, is typically considered a minor REST API change. The application version should, at minimum, be upgraded from x.y.z to x.y+1.z (once again, upgrading to x.y.z+1 is insufficient).
    Note, however, that when a minor SemVer or service implementation change occurs, it may or may not generate a change in a given REST API, and any change that does occur in that REST API won’t require a full, new entry point. It will only need a new path or a new data input/output reference within the existing REST API.
    A big change, such as exposing a new design architecture for a REST API, should always be considered a major SemVer change. Still, when a given REST API change is designated a major SemVer change, it is worth considering whether the same version number should be used for both the REST API and the service implementation.
    JFrog’s Artifactory opts against having REST API entry point versions. Instead, every time there is a major change, a new command is added, along with a new path to the REST API. The old path is then gradually deprecated. With this method, there is no versioning of the API path (i.e., there is no version number in the REST API’s URL). This practice presupposes that insisting on entry point names always containing the major version of a service is an artificial and inefficient constraint. On the other hand, the approach can be problematic when major structural, command, and service implementation changes are needed, because that requires restructuring the REST API in the process.
    Another option is to create a new version of the REST API and restructure the commands under it. This major version increase in the REST API doesn’t have to be tied to the SemVer version. Consequently, REST API v1 might serve services v1.0, v2.0, and v3.0, while REST API v2 pertains to v4.0 and above.
  3. Handshaking
    If a proper handshake between clients and REST APIs is not established, it becomes difficult, if not impossible, for each side to confirm the other’s version and establish compatibility between the two.
    When it is known that certain changes will result in failed connections to a given service, the problem can be avoided or resolved by providing clients with the new code needed to execute the necessary handshake (a minimal sketch of such a handshake follows this list). Alternatively, developers can create REST API documentation with clear definitions and a compatibility matrix covering the different versions of existing services and the REST APIs that pertain to them.
  4. Maintaining Multiple REST API Versions
    When upgrading an overall REST API version, there’s usually a need to sustain and support (at least for a limited period of time) the old version. Problems can be avoided by making sure that endpoints in the old version are clearly mapped to corresponding endpoints in the new version, and that a compatibility matrix is maintained for each endpoint (although this can become quite complex as the REST API grows). Additionally, new features should only be added to the new REST API version. This will gradually encourage customers to upgrade.
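
Here is that handshake sketch – a minimal illustration in Go using only the standard library; the endpoint path and response fields are assumptions for illustration, not a prescribed standard. The server exposes a small discovery endpoint reporting its API version and the minimum client version it still supports, and clients check it before issuing functional calls:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // versionInfo is what the service reports about itself during the handshake.
    type versionInfo struct {
        APIVersion       string `json:"api_version"`        // version of the REST API, e.g. "2.3.0"
        MinClientVersion string `json:"min_client_version"` // oldest client version still supported
    }

    func main() {
        mux := http.NewServeMux()

        // Clients call this endpoint first and compare the answer with their
        // own version before making any functional requests.
        mux.HandleFunc("/api/version", func(w http.ResponseWriter, r *http.Request) {
            json.NewEncoder(w).Encode(versionInfo{
                APIVersion:       "2.3.0",
                MinClientVersion: "2.0.0",
            })
        })

        log.Fatal(http.ListenAndServe(":8080", mux))
    }

A client that finds itself below min_client_version can fail fast with a clear “please upgrade” message instead of failing obscurely in the middle of a request, and the same endpoint doubles as an anchor for a documented compatibility matrix.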

The proper design of REST APIs is one of the most important things a software engineer can do for each and every exposed service. And making sure that REST APIs are versioned correctly is a distinctly human endeavor. As indicated, the most critical concern is how precisely one links the manual work of designing and versioning REST APIs to the SemVer mechanism associated with software generation.

While automation has done much to increase the productivity of software development and reduce the number of hours necessary to attend to non-innovative chores, there’s just no way around it – REST API management is our responsibility. The work demands care, dedication, and focused attention. When we do it right, we reap the praise of industry bosses – and users, alike.

The Seven Deadly Sins of Versioning (Part 3): Versions in the Code

Versions vs. Versioning in the Code

Versioning has existed ever since software began. And for half a century, the process of creating binaries has required the existence of version information. For most of that time, software building was on the right evolutionary track: systems would ask for a version declaration at the time of the build, but this would not be included in the code base.

Around the turn of the millennium, with the rapid adoption of version control systems (VCS), a bad practice crept into development: build systems became solely reliant on information held in the code. As updates, patches, bug fixes, and software upgrades started coming faster and in ever greater numbers, the habit of placing version text files inside the code continued, without much industry reflection on just how dreadful an idea this really is.

Package managers have grappled with this issue. Some have simply reinforced the problem, others have been more solution-oriented, and still others have made things a bit of a muddle. Early package managers, such as RPM and Debian, didn’t actively contribute to the problem because they offered no option to place version information in the code. However, from 2005 onward, newer package managers, such as Maven, RubyGems, and npm, pushed developers into this practice by automating its execution. More recently, as developers work with expanded build environments and semantic versioning (SemVer) has taken hold, different tools have taken strides to address the issue: Gradle makes versions in the code optional, while Docker and Go make it impossible.

And the difference between having versions in the code and not is dramatic.

Simply put, having versions in one’s code is bad. It requires the existence of a file within a code base that identifies a particular version number, so any time one needs to change a version number, the code base has to change. Versioning as a process, on the other hand, is substantively different. Here, the code includes tooling capable of appropriately updating the version number with each change made to the software; however, the version identification information is not stored in the code itself. What is in the code is information about the process by which versioning will be executed. In other words, what’s being coded is the type of versioning that’s desired (i.e., the versioning process), but not the version number.

When versions are not part of one’s code base, one can identify any point in that code base and create a branch from it. When the version is inside the code base, however, each new version generates a code change, since the code base must be altered merely for the sake of creating a new software binary version. The problem with this type of new binary is that it isn’t immediately evident whether the code change that created the new version is a real change; all that can be known is that the version represents a change.

Whether we are dealing with major upgrades or minor patches, we always want to create new versions. What’s fundamentally at issue is where this version information is to be stored.

How It Has Been Done and Ways of Doing It Better

Many developers, while continuing to place versions in the code, have implemented workarounds that make the version information embedded in the code dynamic. The file describing a particular package still contains the version information, but the version part is generated dynamically: a script kicks in when building from this version and automatically changes the parameters related to the build process.

However, this merely swaps the manual step of changing the code base to create a new version number for an automated one. While the script is changing the code for the purpose of versioning, it isn’t placing the new version in the VCS. Rather, the script is opening a gap between the VCS and the build system. So, at the end of this process, we still don’t have, in the VCS, the exact code base that created the system.

Gradle represents a great advancement in versioning. It allows one to add things as part of a normal build process without having to change the code to generate a new version number. At its core, Gradle separates static information from the dynamic process of creating a binary. The input used to calculate a version becomes part of the declaration of the module and the environment. Gradle is a non-opinionated build automation system: it permits making any parameter dynamic or static, as desired. And if a developer is still inclined to keep their version in the code, Gradle can accommodate that, too.

For all its innovativeness, though, Gradle is so flexible that it offers developers all the rope they need to hang themselves.

Docker is also quite flexible, although it’s opinionated as regards versions of the images one creates. Docker differs from Gradle in that the version is not part of the Dockerfile at all. This allows use of the same file and the same code base to create multiple versions without the need to change anything in the VCS. A Docker build creates versions only from the dynamic parameters that are given to it through the build process itself. This still leaves build developers to find a versioning methodology to be used as part of their continuous build and integration processes. In Part 1 and Part 2 of our 7 Deadly Sins of Versioning blog series, we discussed best practices for SemVer, patch numbering, and hash versioning, which we believe will help to reinforce good versioning methodologies.

Go modules (introduced in Go 1.11) add more constraints and force more good behaviors related to versioning, branching, the relationship between versions, Git, Git branching, APIs, major version changes, and so on. A notable characteristic is that Go compiles source code into a full executable. The Go modules design is such that go.mod files (which are part of the source code) do not contain the module’s own version information. Versions are created from one’s build command and build environment. Go is an opinionated and solid solution for versioning. Nevertheless, it has created some real-world conflicts, because it just doesn’t take into account the variety of “religions” to which developers adhere when it comes to naming their versions and creating their Git tags and branches.
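
To make that concrete, here is a minimal sketch of the Go pattern (the variable name and the exact tag format are illustrative assumptions, not mandated by Go): the source code carries only a placeholder, and the real version string is injected by the build command from the build environment.

    package main

    import "fmt"

    // version is deliberately just a placeholder in the source code;
    // the real value is supplied at build time, never stored in the code base.
    var version = "dev"

    func main() {
        fmt.Println("running version:", version)
    }

A build command along the lines of go build -ldflags "-X main.version=$(git describe --tags --always)" then stamps the binary with a version derived from the Git state, so no commit is ever made just to bump a number.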

The Garbageman Cometh

It’s a fact that the vast majority of all the binaries that are created are never released and simply accumulate in software junk piles. In fact, at JFrog, we like to think of ourselves, in a good way, as software garbage collectors. As developers create more and more software, there will necessarily be more and more versions. And with all of this there will be more and more rubbish. It’s just the nature of our trade. Someone has to collect unusable/failed versions, tag them, and know how to throw them away so they don’t clutter software/binary creation systems.

In years past, this work was handled by human beings. Nowadays, JFrog’s binary repository manager, Artifactory, automates garbage collection, sorting, and disposal efficiently, transparently, and, most importantly, without pausing the development process.

Garbage collection also raises the issue of sequential gaps in versioning and why they are to be encouraged in our modern world of development. If it’s a given that most binaries will never go into production or distribution, but they will have version numbers, then it’s logical to have “version holes.” In other words, we might have a perfectly useful piece of software, v2.1.2, whose next useful, prime-time-ready version is v2.1.5, and the one after that v2.1.8. The gaps are an explicit admission that software development takes time and that, yes, mistakes along the way are just part of the process. Most companies don’t like sequential gaps because they feel it publicly exposes what they perceive as a weakness. However, most users already intuit, if they don’t outright know (through experience, if nothing else), that not all software versions are perfect. More to the point, most users don’t pay any attention to software version numbers, so why companies should be so uptight about version holes is a bit of a mystery.

The Biggest Problem

There have been great improvements in version management. But it may well be that what we’re waiting for isn’t the next great innovation to come down from on high, so much as for open source developers to be, well…less lazy. Many treat version management as an afterthought, at best. Yet it’s in their own best interest to focus more attention on the issue, as advances in this arena will make their professional lives better and more productive. Simply stated, it doesn’t need to be any more complicated than remembering that the goal of a good versioning system should always be to clearly identify versions and to keep the generation of new versions flowing. Using SemVer and hash versioning correctly, as described in Part 1 and Part 2 of this series, will help those involved in liquid software pipeline programming and production to quickly and efficiently clear away bad versions, and to continuously create full stack applications that are stable and reliable.

Changes today are mostly being driven by passive exposure to new tools and procedures, as opposed to pro-active lobbying for particular modernized standards. Today’s developers are, out of necessity, acquainted with a variety of package managers. This is because software components are created by a variety of developers with each using his or her favorite tool. As developers interact in the course of working on particular pieces of software, they must learn about tools they may never have used before.

Now they must consider this knowledge beyond tomorrow’s workload. They must see a bigger picture for themselves and the industry. We know that when developers start to demand changes, their voices are heard and actions are taken. Versioning can get infinitely better than it is. And you can make it happen. It’s your move, developers.

10 Reasons You Don’t Need Continuous Updates

1. You’re always and forever happy with your latest release

If your last release was pretty stable, you fixed a bunch of bugs, and you have quite a few happy users, well, maybe it’s best to let sleeping dogs lie. Why update something good? You have at least 6 months before you need to release the next version, if at all. Maybe you’ll never have to update it; after all, your code is perfect, and people are always pleased with what they have.

2. Forget microservices. You love the ceremony around big releases.

You love the smell of integration hell in the morning. You just can’t get enough of modules not communicating, debugging APIs and negotiating with other teams.

3. Zero downtime? C’mon. Anyone can find some time to shut down to upgrade their systems.

Whatever it is they’re doing, your customers should always be able to take your systems down for a bit to install the latest upgrade.

4. DevOps is all hype

There’s no way it’ll ever stick. Where are the good old days of silos where I could just throw the software which works on my machine over the wall to Operations, because it’s their problem now?

5. Backwards compatibility is a thing.

Versions never conflict, and backwards compatibility never gets broken, so what can possibly go wrong with SemVer?

6. You trust people more than you do machines.

Who needs automation? Once you have an engineer that gets the job done well, why not let him do it over and over again. He’ll just keep getting better at it.

7. IoT will never take off.

“Things” will never be as smart as computers. My refrigerator will never talk to the supermarket and order things by itself. Let’s keep appliances dumb.

8. Security vulnerabilities are all a conspiracy theory.

Don’t believe all the FUD being spread about the Struts 2 vulnerability behind the Equifax breach, or about Meltdown and Spectre. Those are just rumors spread by people and companies who are trying to push us into endless cycles of unnecessary software updates.

9. Who needs promotion pyramids.

Your developers work real hard to write great quality software. Why not just deploy development builds to production?

10. Rules were made to be broken, so were REST APIs

If you have an enhancement to a REST API, why can’t you just change it? Don’t your users want the latest and greatest? Doesn’t the saying go, “Move fast and break things”?

If you believe any of that…

… maybe you should read Liquid Software.

The 7 Deadly Sins of Versioning (Part 2)

In the first of this series of blog posts, we talked about the problems with SemVer. In this post, we move on to Hash Versioning.

Hash Versioning

We define hash versioning as versioning in which the creation of a version is partly based on the hash of a set of data (typically, the source files).

Hashing in Git

Git is built on hashing: it provides a SHA-1 checksum for every commit made to a repository. The hash is a unique stamp that represents both the state of the code within Git and the merge history that produced it. When branches are created from a given commit and no new commit is added, the hash remains the same; thus a Git hash does not identify a specific branch within a repository. Additionally, when carrying out merges, a new checksum is generated even if the final state of the source code is identical to that of the original branch. This can be avoided through the correct use of fast-forward merges and rebasing, but few individuals possess the level of mastery necessary to manage Git with such perfection.

Why is hashing used?

Non-pre-release SemVer is structured sequentially in a major.minor.patch number format. It presumes, for example, that version A.B.C+1 is necessarily a newer (maybe better) version than A.B.C., and similarly, A.B+1.C is a more featured or advanced version than A.B.C., etc. SemVer also allows for representations of non-release versions as a pre-release part appendix to the version number (e.g., A.B.C-buildnumber). This can pose a significant problem in the age of continuous integration and continuous delivery (CI/CD). Not only might there be thousands of interim builds between releases, but development today is not sequential – it’s conducted in parallel, by different teams, working on different branches.

In a continuous environment, SemVer is inadequate because a larger version number cannot be assumed to represent the build containing a new and appropriately-tested feature. When parallel builds are running and/or parallel branches are continuously being built, hash versioning is preferred, as every parallel stream can automatically generate its own hash number; there’s no need to use a centralized counter to generate a sequential semantic version number.

So, how is hash versioning a deadly sin?

It isn’t. But since the best practice is to use both SemVer and a hash, the sin lies in the way hash versioning is improperly generated and used. The biggest problem is when SemVer isn’t used at all.

It’s far better to replace the pre-release build number in the SemVer layout with the hash. This way, one gets the best of both worlds through the production of human- and machine-readable versions. The human part evolves much more slowly, which allows it to be managed by humans. Meanwhile, the machine part is produced rapidly and in a parallelized fashion.

Using a hash instead of a build number in a pre-release appendix allows for the best of both worlds – retaining the good aspect of SemVer (i.e., having a human-readable version) and the benefit derived from the use of machines/automation (i.e., hashing).
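
As a small illustration (a sketch only; the seven-character hash truncation and the hyphenated layout are common conventions, not requirements), a build step might stamp each artifact with the human-managed SemVer part plus the machine-generated commit hash in the pre-release slot:

    package main

    import "fmt"

    // buildVersion joins the slow-moving, human-readable SemVer part with the
    // fast-moving, machine-generated commit hash from the build environment.
    func buildVersion(semver, commitHash string) string {
        return fmt.Sprintf("%s-%s", semver, commitHash[:7])
    }

    func main() {
        // Prints "2.1.2-3f4a9c1"
        fmt.Println(buildVersion("2.1.2", "3f4a9c1e8b0d2c5a7f6e4d3b2a1c0f9e"))
    }

The human part (2.1.2) is what people reason about; the hash part is what machines use to pin the exact source state behind a given binary.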

By way of example, in our 2018 book, Liquid Software: How to Achieve Trusted Continuous Updates in the DevOps World, we discuss the application of liquid software in the automotive industry. To extend that discussion, let’s consider the manufacture of a specific car model intended for sale in a particular country. Source code (representing the car design) is built and packed into a binary (the car), which now has an identifier (i.e., the hash of the source code). The hash identifies all facets of production that require testing and validation. However, the developer neglects to consider other variables that should go into that hash, which have no impact on the functionality, performance, quality, or safety of that automobile, such as color. This results in the creation of different binaries (different cars with separate colors) that are all using the same hash number.

In terms of sensible software development with highly practical applications, color can represent a different packing algorithm, a different pre-configuration default, and a special deployment setting. It can be one of several parameters that impact package content, but not testing and other validation processes to which the software is subjected. The lack of unique hash numbers to denote color variations generates unnecessary expenditures of time and money. This is because human beings must be involved in order fulfillments to assure that auto dealerships receive the vehicles they need in the colors desired by their customers. The objective is to avoid repetitive, manual installation and configuration of software.

Absolution from sin

Hashing is at the heart of the Git architecture. The hash of the source code is automatically generated and ready to use in a version of any given binary file that’s built from a given git state. The color dilemma arises when developers only use the git hash as part of a file’s version number. The solution to our car color conundrum is to either add the color variable into the hashing function, or to add the color variable to the source code.
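
A minimal sketch of the first option (the parameter names are hypothetical, and this is not JFrog’s implementation) is to feed the extra packaging parameters into the hash alongside the source commit, so each variant gets its own identifier without touching the code base:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // binaryID derives an identifier from the source commit hash plus any build
    // parameters (the "color" variables) that change the packaged binary but
    // are not part of the source code itself.
    func binaryID(commitHash string, buildParams ...string) string {
        h := sha256.New()
        h.Write([]byte(commitHash))
        for _, p := range buildParams {
            h.Write([]byte(p))
        }
        return fmt.Sprintf("%x", h.Sum(nil))[:12]
    }

    func main() {
        // Same source commit, different packaging parameters, different identifiers.
        fmt.Println(binaryID("3f4a9c1", "color=red"))
        fmt.Println(binaryID("3f4a9c1", "color=blue"))
    }

Because the source commit is unchanged, no retesting of the code is triggered, yet each packaged variant remains individually identifiable downstream.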

If our theoretical auto-manufacturing developer doesn’t include color as part of the source code, Git cannot know to create a specific hash to address that variation. However, if color was added to the code base, it would trigger unnecessary testing and validation processes, as this build variable is irrelevant to product operability or quality.

A liquid software pipeline can solve the “problem” of such inconsequential automobile variables needing to influence the hash, but not be a part of the source code. After the first part of the pipeline has handled manufacturing, testing, and validation information, without color information, the pipeline then screens for additional parameters and creates specific binaries for them. These are used to calculate appropriate hash numbers to address such variations. For liquid software, this is very important because versioning should always be handled by machines and hash versioning is the perfect type of machine version.

Even still, there remain questions of order and prioritization to be resolved. Even with the best hashing system in place, the hash itself won’t make clear which version is preferable over another. For example, with hashes constantly being generated when problems are detected and fixed, how can the developer know that a new hash is actually the one that’s best to use? How can hash versions be sorted for quality? We might presume that the answer lies in chronology (i.e., the latest version is better than a prior version). However, hashes don’t contain chronological information.

Liquid software solves this problem through the deployment of a metadata server that’s part of a good CI/CD platform and version control system (VCS). The server is programmed with a set of filters and queries meant to identify desired, traceable parameters for a given situation. Similarly, as it’s very difficult to recreate a build and its environment from a hash version, having an automatic way to link between a hash version and the software offers an efficient way to debug a service that has already been deployed to production.

Trains and platforms

When it comes to modern software development, SemVer forces everyone to work within the context of the same sequential train, which can be, well…a train wreck. In a continuous pipeline flow, it’s a senseless constraint whose usefulness is limiting in scope and creates bottlenecks in development processes. SemVer is a tool designed for human readability, which works against our need to let machines do the work of versioning.

With an exponential rise in binary creation and a need for speed in the creation and deployment of updates, automation is served through Git, which is a tremendous advancement. Liquid software affords us a next-mile advantage. It builds on Git’s success by further refining development, deployment, analysis, and prioritization steps through continuous processes that ensure the best hashes are always being used.

Upcoming posts in our Deadly Sins of Versioning series will address multiple packages for the same version and versioning in the code.

How Liquid Are You?

The Wicked Witch of the West had it wrong. In the land of Oz, she feared getting wet and melting away, but for software development in our world, becoming liquid is exactly what you should be seeking to do.

The liquid software revolution is already underway, advancing steadily toward reality as DevOps takes firmer hold in enterprises. Development and deployments are moving away from fixed, versioned releases, toward streams of verified software components. Ultimately, this can feed a steady river of trusted continuous updates that flow reliably to computing environments and devices.

So, how liquid are you now? Will your development pipeline be able to easily join the flow, or will you drown in the coming flood?

It’s not just about being ready for the future, either. Even in its nascent forms today, liquifying the software development process produces huge benefits in faster and more frequent releases at lower cost.

This approach emphasizes production of small, functional components instead of big application packages, for more complex, machine-meaningful versioning. It’s enabled by greater use of automation, to produce a seamless and fluid flow of continuous updates.

Truly liquid software has yet to exist, but it’s on its way. The requirements are being driven in part by rich technologies like IoT where thousands of devices need to be kept reliably and safely current, or perform secure rollbacks at high scale when flaws are found.

Becoming liquid ready

How ready are your software development processes to join the currents? How can you help melt the barriers, bring your costs down, and increase productivity and profits?

Here are the essential things you need to have in place to grow toward liquefaction:

Commit to DevOps

The liquid software revolution aims to produce continuous updates, the next evolutionary outcome in DevOps. So you’ll need to have strong DevOps systems and practices in place now.

Today, DevOps software development practices support continuous delivery (CD), in which code changes are automatically built, tested, and prepared for a release to production. This is made possible by continuous integration (CI) practices, which feed all code changes through testing environments after they’re built.

The cycle of CI/CD reduces time to fix bugs and accelerates delivery of features – the essential groundwork for a system of continuous updates.

Where CD produces frequent updates, bringing sets of fixes and features in a new version, continuous updates accelerate this with miniaturized updates. Each update may contain fewer changes, but they occur more frequently, in a system of small but continuous improvements.

Having the right things in place means investing in the infrastructure that will support it, from your source repositories, to the CI servers, and the ability to manage the large number of binaries that these procedures will produce. Just as important, and at least as challenging, is growing the team culture that makes DevOps thrive.

Move to Microservices

Rivers are made of tiny drops, so liquefying means shifting development strategies to small, focused pieces of software that do one thing well.

In traditional software development, an entire application is tied to a single codebase. But as software has moved to running online, that’s proved slow and costly, and burdensome to scale.

Instead, more development processes demand breaking code into independent microservices that run as separate processes. Applications become a collective, orchestrated effort across them, as output from one independent service is used as an input to another.

It’s an early step to liquifying your software, as specific functions can be readily updated and scaled at low cost. Code becomes more resilient as well, even as the cadence of updates accelerates.

Using container technology like Docker is a good step to shifting to microservices, and building in smaller components that layer into powerful applications. Multiple containers can run in isolation from each other, even as they share the same kernel. And you’ll need the tools that can maintain a registry of those images for deployment through an orchestration tool like Kubernetes.

Establish a Promotion Pyramid

Your CI/CD pipeline will generate a very large number of binaries as code moves through development, test, staging, and production – a challenge that will only grow as you liquify toward continuous updates.

Having a clear promotion pyramid in your pipeline will help you manage that volume. It will bring each release through a uniform life-cycle of staging for validation and QA before it gains a final promotion to where it can be found and used. This essential process will distill the rush of binaries down to deploying only those with the lowest risk.

Establishing this clear hierarchy will enable the trusted continuous updates that being truly liquid can deliver.

Achieving this will require tools that can manage the growing quantity of binaries produced, recognize them as units of builds, and control their visibility as they are promoted through their stages.

Informate to Automate

Keeping updates truly continuous means eliminating as many needs for human intervention as you can. Every time a person needs to approve or review something, it’s like damming your update stream, slowing the flow and flooding backward.

The more your procedures for validation, promotion, and deployment of software through the CI/CD pipeline are automated, the stronger and steadier your current of updates will be.

What enables this is metadata, information that lets our automated tools make sensible decisions about whether or not a piece of software and all its component pieces are sound. That includes metadata about the origin of a component, its history, and the results of validation steps.

This information helps choose whether your software gets promoted through its pipeline stages toward release and deployment. It can include internal metadata generated about your own projects and dependencies, and external metadata about other parts (such as open-source components) you use.

That requires the tools that can manage that metadata, which can come from a variety of other tools and sources. It will need to integrate smoothly with your promotion and deployment automation, to help your liquid system cascade smoothly outward.

 

Liquifying your software development processes pays off in the form of a faster, more resilient, and more reliable flow of updates at increasing scale. When you find yourself shouting “I’m melting!” it won’t be a cry of anguish, but one of triumph.

The Seven Deadly Sins of Versioning – Part 1

SemVer Patch Number

Discussing (and defining) versioning is never simple. It’s a complex issue, not least because of the large number of developers who have been involved in creating solutions for a wide array of environments.

Update package managers have often been created to address the needs of specific systems – Linux (APT and RPM), Python, Maven, Ruby, NPM, Nuget, and so on. It becomes confusing, time consuming, and costly for developers who are collaborating with one another but working on different platforms, as they must adapt to the versioning constraints of each package manager.

Sequence-based identification arose as a means of addressing this multiplicity of versioning activities, and semantic versioning (SemVer) is currently the most successful of these schemes. In its current iteration, Semantic Versioning 2.0 is helping the software industry to focus on how versioning should be done. SemVer establishes strong, yet streamlined constraints through the use of a three number versioning system (Major.Minor.Patch). It’s not a perfect solution, particularly as it’s proven itself to be flexible enough for people to create their own SemVer adaptations.

Also, although the rules demand that only those three numbers be used for releases, one can add a “prerelease tag” to carry information during the development of a Major.Minor.Patch, before it is released to the world. We might have version 2.1.2-milestone61, where “milestone61” communicates a step in the process of creating that specific semantic version. However, a micro-patch on top of a patch is forbidden: if we want to execute a patch on v2.1.2, the patch must be v2.1.3.

These strong constraints force the things that are labelled as patches to be real patches. If a patch release adds too many changes, a concern arises: should a specific issue in v2.1.2 later need fixing – and a v2.1.3 and a v2.1.4 already exist, with a quite large amount of code inside those versions – there will be no way to release a micro-patch for v2.1.2. The contract must be adhered to: to fix the issue on the v2.1 stream, it must be addressed in the appropriate patch sequence and labelled v2.1.5.
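
The sequencing rule is simple enough that a machine can apply it. Here is a minimal sketch (the numbers are illustrative) of computing the next allowable patch version on a release stream:

    package main

    import "fmt"

    // nextPatch returns the next patch version for a major.minor stream, given
    // the highest patch number already released on that stream.
    func nextPatch(major, minor, latestPatch int) string {
        // A fix to the 2.1 stream while 2.1.4 already exists must become 2.1.5;
        // issuing a micro-patch on top of 2.1.2 is not allowed under SemVer.
        return fmt.Sprintf("%d.%d.%d", major, minor, latestPatch+1)
    }

    func main() {
        fmt.Println(nextPatch(2, 1, 4)) // prints "2.1.5"
    }
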

It’s a best practice to have strong versioning constraints. It makes the lives of all developers easier, as there’s a whole lot less to decipher. Nevertheless, many companies and many teams have a hard time staying within these strict confines. It’s not rare for developers to use prerelease tags in post-release situations, or to issue micro-patches on top of micro-patches. As a result, we still see very low levels of trust in updates of patch versions.

This is overcome when liquid software updates are flowing and systems are being updated automatically with the latest version of a given library or piece of software. If a developer has released the v2.1 stream of their software or library, then every time a patch is ready for release, all users who need and are capable of accepting the upgrade will automatically receive it. And patches will only be released once they’ve been fully tested and verified as proper patches that fix bugs, security issues, the behavior of a piece of code, and the like. They cannot contain any database, schema, or breaking API changes, nor can they impact the way a system actually responds. Proper patches should always be capable of being added into running systems, at any time, with zero downtime.

The First Deadly Sin

So the First Deadly Sin of Versioning is committed when one is doing something that looks like SemVer, but in reality is adding a lot of features, a lot of code, and actually changing the behavior of a given piece of software on a patch release. If used precisely as intended, SemVer has the basic requirements to deliver a continuous flow of automatic updates, and we’re certainly seeing more and more evidence of this in the software industry’s execution of continuous integration and continuous deployment.

To keep one’s software soul pure, one must be certain that patches are really patches. And one way to truly know that a patch is a patch is to establish an environment in which such releases are run-of-the-mill. A patch is a fix; it’s an adjustment. It doesn’t require any publicity or ballyhooing because it’s not adding anything. Rather than altering or enhancing what a piece of software is supposed to do, it’s merely assuring that the software continues to do it. Liquid software assumes that the continuous delivery of patches will always be part of the normal flow of any continuous update system. And most importantly, patches should be done by machines. Maintaining consistency in the standardizations that SemVer imposes can make the work of machines quite simple when carrying out comparisons and analyses between versions, and generating sensible versioning of updates.

Upcoming posts in our Deadly Sins of Versioning series will further explore the uses and misuses of SemVer, hash versioning, multiple packages for the same version, and versioning in the code.

No-Feature Releases – The Fun Way to Reduce Technical Debt

As executives and managers, we know that technical debt is the worst inhibitor to engineering productivity. “Short cuts make long delays,” a wise hobbit once said in Tolkien’s epic, and developers’ pragmatic concessions to meeting the deadlines we set make that debt grow.

So how can we create a debt-free engineering culture?

To start, recognize what’s causing the debt. The most likely cause? Features.

Every software release is an opportunity to delight and engage the customer with new features or a new look. And we get rewarded for it when our products gain buzz in the press and in the marketplace.

But the drive to ship new features often forces developers to choose technical solutions that are easy and quick over approaches that are more flexible but require more time. Ship it now, fix it later — and later never comes, even as new obstacles emerge from the code’s shortcomings.

Technical debt is never due to having too few engineers. You can hire all the engineers you want, but if you don’t have the discipline to live below your means, the debt will continue to grow.

After a major release, ask yourself “Did I pay down the debt? Or did I just refinance?”

Paying Debt Down

Like any debt, the technical kind has interest costs that compound with time. The longer you wait, the harder it gets to implement changes later on. Unaddressed bugs accelerate entropy in your software, limiting the return on your engineering investment.

There’s a spiritual debt as well that risks making your developer teams discouraged when they feel unable to send their best, most bulletproof work into the world.

A practical, simple, and fun way to pay down your technical debt is to launch an initiative for a “No-Features Release.” Here’s how it works:

  1. There are no new features. Not a single one. No exceptions.
  2. Prioritize debt payments as a team. Focus on solving the biggest pain points first.
  3. The whole team works together to pay down technical debt.
  4. Celebrate loudly, to yourselves and to the public, a release with no features.

Is this possible? I’ve done it at Microsoft and Google, and Apple did something much like it for their release of iOS 12.

It might seem challenging to get everyone on board, but once you do your engineers will turn it into a rallying cry that boosts morale.

You’ll make your customers very happy too, as they’ll be pleased by a more reliable and forward-looking product.

What are some of the things you might focus on?

  • Security – Analyze your entire attack surface to better understand your risks. Put a Red Team Infrastructure in place to identify vulnerabilities. Obscure your attack surface and close any gaps.
  • Performance – Improve page load times, API response times, memory consumption, startup/shut down times, and load testing.
  • Observability – Improve logging infrastructure, error messages/warnings, and tracing levels.
  • Plain Old Bugs – Go for quantity! Get to the P3’s and P4’s and you know you’re doing it right.
  • Test infrastructure – Implement new unit tests, automated headless installation, feature tests, test reliability, code coverage, chaos monkey, fault injection, and load/stress testing.
  • Documentation – Review all documentation starting with oldest first, and update or delete your docs as needed.
  • Dead Code – Keep your code lean, and delete anything that isn’t still doing work.

When You’re Done

When we complete a new release, we usually like to tell our customers about the new things it can do. But once you’ve completed a no-features release, what do you have to crow about?

A lot. Tell the world you did a no-feature release because you care about quality. Boast how customers gain a cleaner, better performing product that can continue to grow with them as the technology evolves.

Just as important, share the story of your engineering culture with the world. Provide clear metrics of real improvements, such as how many bugs were found and fixed. Explain how the new improved infrastructure makes the team stronger.

But just fixing some bugs doesn’t change your culture. Following a no-feature release, use what you learned to incorporate debt reduction with every release.

Your engineering teams will be grateful, your customers will benefit, and you’ll no longer keep making the technical debt payments that drag productivity down.

Those are the kinds of features you really need.

Defeating Zero-Day Attacks

Zero-day security threats constitute a critical imperative driving software development towards liquid software (i.e., continuous updates).

First, let’s be clear about terminology. When we hear about zero-day vulnerabilities, zero-day exploits, and zero-day attacks, the zero is referring to the number of days that will elapse between the moment a software developer learns of a given vulnerability, exploit, or attack, and the moment that problem is disclosed to the public. So, a 30-day vulnerability will have been reported to a software developer thirty-days before being announced publicly. That’s a month-long window within which this type of issue can be mitigated.

It’s a Black and White Situation

Zero-days also raise ethical questions related to cybersecurity and the distinctions we make between Black Hat and White Hat hackers. The former are typically evildoers out for personal gain, while the latter are on a mission to find vulnerabilities for companies to patch before they can be exploited. Typically, we want our vulnerability researchers to be White Hats – professionals who use their talents for the good and honorable purpose of identifying vulnerabilities, offering solutions for them, and giving vendors the time necessary to fix them.

A vulnerability window is the time between the moment a vulnerability becomes known and exploitable and the moment it is patched. It’s a given that such windows will, more often than not, be greater than zero. Here’s why: White Hats will alert companies about vulnerabilities and wait for a fix to be released before publishing information about what they have found. These are non-zero-day vulnerabilities. Black Hats begin exploiting vulnerabilities as soon as they discover them. These are zero-day vulnerabilities.

Once a vendor is in the know about a vulnerability, their objective is to release patches or updates quickly, working to keep the vulnerability window to as limited a period of time as possible. Naturally, vendors always prefer to resolve vulnerability problems before information about them becomes public knowledge (i.e., to have a zero or negative vulnerability window).

But here’s the zero-day dilemma: There’s a huge marketplace for zero-day detection and reporting. Generally, buyers don’t have any of our best interests at heart and are willing to pay significantly more for zero-day information than those who are looking out to protect our systems. As you might imagine, then, most vulnerabilities are identified by Black Hats who are highly motivated to find and rapidly exploit the weaknesses they uncover. From there, it’s a race against time. The faster a software vendor can issue a patch, the less damage will be sustained during an attack. It’s a reactive, rather than proactive strategy. And obviously, every malicious cyberattack is zero-day because, if it’s being done for malignant purposes, the attacker has no interest in notifying the vendor.

It Gets Worse


Meltdown and Spectre are two critical vulnerabilities that were found at the start of 2018 to exist in most modern CPU architectures, including Intel, AMD, and IBM POWER, as well as some ARM-based microprocessors. The latter of the two, Spectre, is a zero-day enabler and is believed to have contributed to a significant spike in zero-day attacks.

Meltdown focuses on breaking the isolation between user applications and the operating systems on which they run, by allowing a rogue process unauthorized access to all memory through the exploitation of certain optimizations in processor threads. Bad as this is, however, protection against Meltdown can be achieved by supplying more context to running threads and making sure that this context is identifiable and used to provide additional isolation. This process – known as collaring or marking the threads – permits us to determine where a data request is originating, thus making it easier to know whether a particular request is trying to get to a place where it’s not allowed to be.

At the beginning of 2018, there was much more chatter about Meltdown than there was about Spectre, because Meltdown represented a clear and present danger. It was practically a classroom example of how, in JavaScript, one can hop from application memory straight into operating system memory and then mine all manner of personal data, including passwords. Scary stuff! But thanks to intrepid researchers, operating system and processor vendors have since released fixes, and the threat of Meltdown has been mitigated.

Spectre is different. Early on, it received less attention than Meltdown because it’s harder to exploit. Nevertheless, it’s a pernicious vulnerability that abuses a processor performance optimization known as speculative execution, in which the processor tries to guess how software will behave at certain junctures in its execution logic.

This can be exploited if an attacker can examine the logic embedded within a given piece of software and determine which of a set of branches a processor will speculatively execute most of the time without checking the other options. Once this has been accomplished, the attacker can structure a request for a particular behavior that mirrors the way the processor executes that IF logic. From this knowledge, the attacker can guess how the process will work and then make assumptions about certain states of memory. If those assumptions are correct, the attacker can infer what is in memory and where it resides, and thereby gain unauthorized access to private memory regions holding otherwise secure information. By definition, Spectre can be used as a zero-day exploit. There is no way to write software in such a way that it will be protected from this type of speculative execution attack.
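
For readers who want to see what such a branch looks like, below is a C sketch of the widely published Spectre variant-1 (“bounds check bypass”) pattern. Again, this is an illustration rather than a working attack: the names (victim_function, array1, probe_array) are illustrative, and the branch-predictor training and the cache-timing measurement that an actual exploit requires are not shown.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE 4096

static uint8_t probe_array[256 * PAGE];   /* cache side channel: one page per byte value */
static volatile uint8_t sink;             /* prevents the probe load from being optimized out */

static uint8_t array1[16] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
static size_t  array1_size = 16;

/* Spectre variant-1 gadget. The bounds check looks airtight, but once the
 * branch predictor has been trained with many in-bounds values of x, the
 * processor speculatively runs the body for an out-of-bounds x before the
 * check resolves. The byte it reads out of bounds selects which probe_array
 * page is pulled into the cache, and an attacker recovers it afterwards by
 * timing accesses. */
static void victim_function(size_t x) {
    if (x < array1_size) {
        uint8_t value = array1[x];
        sink = probe_array[value * PAGE];
    }
}

int main(void) {
    /* Architecturally this call is perfectly safe; the danger lies entirely in
     * what the processor may do speculatively, which this sketch cannot show.
     * Predictor training and the timing measurement are deliberately omitted. */
    victim_function(3);
    printf("bounds-checked read executed; speculative side effects not measured\n");
    return 0;
}
```

Note that the code itself never reads out of bounds architecturally; the leak happens only in the processor’s speculative shadow, which is why no amount of conventional input validation in the source protects against it.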

The only true protection against this would be to remove any possibility for speculative execution within processors. That’s not going to happen, however, because it would set back processor performance advancements by at least a decade. Processor vendors have been thinking about ways to make speculative execution a little less predictable – which might make it harder for attackers to make assumptions about behavior – but that’s hardly a solution to Spectre and the class of vulnerabilities it represents.

Beat ’em to the Punch

So, what’s the answer? Speed.

As zero-day vulnerabilities increase, and knowledge of them spreads across the internet and the Dark Web, there will only be more bad actors ready to exploit our systems. Hackers are always going to figure out how to worm their way in. It’s what they do. It’s what they’ll always do. It’s how they get rich.

Liquid software (continuous updates) is all about accepting the fact that there is no protection. There’s only reaction. There’s no faster way to react than through liquid software. Even so, is it possible that liquid, continuous updates will fail? Of course. And when that happens, a rapid update will be required to fix what’s been screwed up. Regardless of circumstance, the faster the response, the better and more protected our software will be.

People Don’t Resist Change. They Resist Being Changed. https://liquidsoftware.com/blog/people-dont-resist-change-they-resist-being-changed/ Wed, 15 Aug 2018 09:24:35 +0000

An American systems scientist, Peter Senge, said that.

He could have been talking about developers who think liquid software (i.e., continuous updates) is still conceptual, something that’s not ready for prime time. They may say, “Oh, that would be really swell, but I don’t think it’s possible.” The thing is, though, it’s already being done. It’s not future-think; it’s now!

But I get it. Software development isn’t easy. Creating and sustaining a product takes lots of dedication and focus. When systems are already in place, it can be daunting to consider supplanting those with new ways of developing, fixing, and improving software. Lots of developers don’t feel they have the time or energy to learn about what’s out there…somewhere…now.

Still, sooner than you might imagine, liquid software will become the norm. It’s inevitable – for a whole host of reasons, none more compelling than self-interest, self-preservation, and good old bottom-line money-making.

Why not gain, without pain?

Is it not true that almost every time the software industry has gone through changes in processes, constraints, and the essential ways we work, it’s been a frustrating struggle? Of course it is. Why? Because there are rarely templates, best practices, or well-established techniques that are part of the average developer’s toolkit.

What if I told you that a significant objective of the liquid software revolution that’s already begun is to solve this problem? What if I said that liquid software is introducing automated systems and standardizations into the development environment that will result in enormous efficiencies designed to eliminate these burdens?

Have I caught your attention?

Wouldn’t it be better if boring, repetitive, and thankless tasks were handled securely and reliably by machines? And wouldn’t that translate into more work hours and brain power that developers could dedicate toward the generation of greater innovations? And wouldn’t that give companies making the transition to liquid software a competitive advantage?

Would any end-users be opposed to receiving seamless, transparent updates with zero downtime? Would enterprise users not like bugs eliminated and new features delivered as quickly as possible? Wouldn’t all of this be a selling point? Wouldn’t that mean more bucks in the bank for those who take the first steps toward liquid software now, instead of running like crazy to play catch-up later?

It’s a fact: Machine control improves software development

Still not convinced?

Then let’s talk about continuous integration (CI) and a CI server like Jenkins. Before Jenkins, there were all manner of end-of-cycle processes that had to be executed manually. Companies used to set aside one to two months for entire teams to do nothing but pre-release integration. Then, new systems were created and, very rapidly, CI was in common use. Those who made the switch know from direct experience that the new way is better. The ones playing catch-up today are those who didn’t think they had the time or energy to devote to learning about and transitioning to CI and continuous deployment (CD). They are now paying a price for lagging behind.

Today, any software firm worth its salt has machines that are already controlling numerous activities and offering solid protection for and reliable feedback from the systems to which they are attached in a real CI/CD environment. Does this describe the circumstances in your software development shop? If it does, when did you get on the CI/CD train? Were you an early adopter or did you hold back? If you held back, what were your reasons? Do you acknowledge that you’re better off with CI/CD than without?

If you came late to CI/CD, were your reasons for not adopting those systems and practices the same reasons you have now for not pursuing liquid software?

We’ve made a great leap forward with CI/CD and we’re experiencing the tangible benefits of that leap. Why wouldn’t the same hold true for liquid software? Major industry players such as Google, Netflix, and Amazon are already profiting from the benefits of moving to liquid software, like the capability for continuous updates.   

It doesn’t need to be a great leap for mankind – just start with that first step

If you’re intrigued, let me encourage you further: Achieving continuous updates is not as difficult as you might think. Accept that it’s possible, not just for big firms, but for software development organizations of any size. Take a few small steps forward (just as you once may have done with CI/CD) to identify the specific impacts that liquid software will have on your software processes, software environment, framework, code, tests, and so on. Your objective, of course, is to produce positive outcomes. So, start there. Don’t ask, “How do I implement liquid software in my company?” or “What will it cost?” Ask, “How will liquid software specifically help my firm to achieve greater productivity, security, service, and creativity?”

Then go to the next level. Explore continuous updating for your stateless services. Look at how you could continuously update your REST API, and how you could apply continuous updates to your data, data access, and data persistence layers. For each, add the constraint of zero downtime and a high availability environment.

Next, consider the things you’re doing now that make it impossible for you to have continuous updates in place. Look at what you’re doing when updating persistence or data layers. You probably need to shut everything down, perhaps on a quarterly basis. Wouldn’t you prefer to execute these updates continuously and without downtime?

If the answers you get inspire you to go further, remember that getting from where you are now to a liquid software future is a process that will take place in parallel with existing processes. This is typically done on the side, modifying your infrastructure, frameworks, and deployment model, as well as the final runtime and runtime environment. But nothing stops while the transition is occurring.

And a final thought: Leapfrogging. The cell phone is a pre-eminent example of this phenomenon. In many developing countries, average citizens never went through the phase of wired telephony and then the transition to mobile. They jumped (or leapfrogged) over the earlier technology to the modern one. In software development, we’re seeing this with greenfield projects that are using state-of-the-art DevOps technology and design for what are called cloud-native applications. Equally, those who can afford to do so, such as big banks and large industries, are creating side projects (essentially small, startup-like enterprises) to address specific issues. These are not adopting old practices, even if those practices are still in use at the parent firms. They’re leapfrogging into new technologies, such as liquid software.

Maybe it’s time to ask your software development innovator if continuous updates are right for you!

 

When Vision Becomes Reality https://liquidsoftware.com/blog/when-vision-becomes-reality/ Mon, 30 Apr 2018 08:40:04 +0000

The Birth of a Vision

We all like to engage with the people who use our products – whether we’re at customer sites, figuring out solutions for complex cases of advanced product usage, or meeting developers at the many conferences we visit and speak at. It’s through this deep level of engagement that we started to see patterns in the pains that our users encounter.

One day, two years ago, as we were deep in discussion about one of the companies we visited, Fred said, “Software should be liquid.” And the penny dropped.

We had all witnessed and taken an active role in the DevOps revolution. We had seen software development evolve from running discrete periodic builds to continuous integration to continuous deployment, and realized that the next logical step was… continuous updates.

The Vision Takes Form

In the years since that first lightbulb moment, we have seen how badly the software industry needs this revolution. From simple feature updates to bug fixes to security patches, through to all-out, widespread, global malware attacks, every one of these scenarios shows the need for continuous updates. And the industry has reacted. Some of the Googles and Netflixes of the world have already orchestrated their infrastructures to implement continuous updates. But those are proprietary solutions, and that’s not our vision. So, we gathered our ideas and thoughts, our expertise and experience, and are trying to push the envelope to make Continuous Updates and Liquid Software a commodity.

The Vision is Now

Some have already taken the first bold steps into Liquid Software, and we are all benefiting from the continuous updates that these companies provide. But as with all revolutions, there is initially some resistance. There’s a perception that moving to continuous updates is a costly and lengthy process, and the industry is slow to adopt. But there’s no reason to fear this change. The ensuing benefits will far outweigh the costs of implementing continuous updates. This is why we wrote “Liquid Software.”

This book is the embodiment of our thoughts and ideas on how any organization developing software can and should achieve continuous updates. It’s the culmination of a long process of brainstorming, discussing, finessing and tweaking every last word and illustration printed on those 193 pages. If you ever wished the machines could work for us instead of the other way around, this book is for you. We hope you enjoy it.

Fred Simon
Yoav Landman
Baruch Sadogursky
