Monday, December 15, 2014

Top 3 Improvements New Agile Teams Can Make

At first, I was planning to write about the top mistakes that novice Scrum/agile teams make. But I wanted to frame it positively. So I ended up writing about the top three improvements new Scrum/agile teams can make. Here goes.

Focus on Stories, Not Tasks

Focus on stories (or features), not tasks. Yes, team members still need to pull tasks from the board and perform them. But don't forget that the team's goal is to complete stories. This means that when a team member has the option to pull a task from the board, the task should be part of the current, ongoing story. Better yet, discourage team members from starting any task that is not part of that ongoing story.

I've seen boards like the one below several times in my years with agile teams. Each story is grouped as a row (left to right) on the board. Notice how the team has completed several tasks, but none of the stories are done.

Scrum Board with Just Getting Random Tasks Done
[Board: tasks A1-A8 and B1-B8 from stories A and B are scattered across the To Do, In Progress, and Done columns; several tasks are done, but neither story is complete.]

To put this in the positive, the team should aim for a board that looks more like the one below. Here, team members stay on tasks for the ongoing story (or feature) until it is done, and are discouraged from starting tasks on another story. This doesn't mean team members become idle; they're asked to help other team members get the story done.

Scrum Board with Focus on Getting Stories Done
[Board: the team finishes all of story A's tasks (A1-A8) before starting story B's tasks (B1-B8), so one story at a time moves across the board to Done.]

Start Only with Ready Stories

I find that only a few agile teams have heard of a definition of ready (DoR). Most have heard of a definition of done (DoD), but not a DoR. I've seen teams start sprints with stories that are far from being ready.

Some organizations (or companies) provide a checklist to get stories ready (not just sort of ready). I've seen such checklists include items like: estimated at the right size, prioritized by business value, and has agreed "done" criteria. It's difficult (almost impossible) to get a user story (or feature) done if the team has not agreed on "done" criteria. These "done" criteria (or acceptance criteria) are usually set before the story can be included in a sprint (as part of backlog refinement).

So, other than the DoD, take a look at the DoR. It might just help you and your team improve productivity.

Split Stories to Fit in Sprint

Sometimes, user stories (or features) are too big to fit in a sprint (or iteration). When this happens, it is better to split the user story than to extend the sprint (to make it long enough for the team to complete the story).

I've witnessed teams extending their sprint cycles just to accommodate larger stories (or epics). And that's probably because they have limited skills in splitting stories. Many new agile teams attempt to split stories by architectural layer: one story for the UI, another for the database, and so on. This results in stories that are not valuable to a user and are interdependent with one another.

So, do be careful when splitting stories. I highly recommend following Bill Wake's INVEST model for good user stories. You can also refer to Richard Lawrence's Patterns for Splitting User Stories.

Conclusion

The above points may not apply to all teams. But if they apply to yours, please let me know. I would love to hear about your (or your team's) experiences.

Here's wishing you more success in achieving your team's goals!

Sunday, November 16, 2014

Maker-Checker Design Concept

We've seen the maker-checker concept pop up several times in our software development experiences with banks. In this post, let me share a possible re-usable design approach. Thanks to Tin, Richie, Tina, Val, and their team, for adding their insights.

What is Maker-Checker?

According to Wikipedia:

Maker-checker (or Maker and Checker, or 4-Eyes) is one of the central principles of authorization in the Information Systems of financial organizations. The principle of maker and checker means that for each transaction, there must be at least two individuals necessary for its completion. While one individual may create a transaction, the other individual should be involved in confirmation/authorization of the same. Here the segregation of duties plays an important role. In this way, strict control is kept over system software and data keeping in mind functional division of labor between all classes of employees.

Here are some business rules we can derive from the above definition:

  1. For any transaction entry, there must be at least two individuals necessary for its completion.
  2. The one who makes the transaction entry (i.e. maker) cannot be the same one who checks (i.e. checker) it.
  3. A transaction entry is only considered completed if it has been checked.

Upon further clarification with the domain experts, we've learned the following:

  1. The checker cannot make modifications to the transaction entry. Modifications can only be done by the maker.
  2. If the checker rejects the transaction entry, it is returned to the maker (with possible comments or suggested changes). The maker can then resubmit changes later.
  3. There can be cases when the transaction entry needs another level of checking (after the first one). This would result in three individuals being necessary for completion.

A typical user story for this would be something like: As a <manager>, I want to apply maker-checker policy for each <transaction> being entered, so that I can prevent fraud (or improve quality).

Possible usage scenario(s) would be something like this:

For maker:

  1. Maker submits a transaction to the system.
  2. System determines submitted transaction to be under the maker-checker policy.
  3. System stores submitted transaction as "for checking".
  4. System displays list of "for checking", "accepted", and "rejected" transactions.

For checker:

  1. Checker retrieves list of transactions "for checking".
  2. System displays list of transactions "for checking".
  3. Checker selects a transaction.
  4. System shows the transaction.
  5. Checker accepts the transaction.
  6. System records "accepted" transaction.

The alternative flow is when the checker rejects the transaction.

  1. Checker rejects the transaction.
  2. System records "rejected" transaction.

Our analysis shows that the transaction entry can have the following states:

  1. for checking,
  2. verified (i.e. accepted), and
  3. rejected.

The checker can either accept or reject the entry.
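These rules and states can be sketched in Java. The class, method, and state names below are my own illustration (not from any real banking framework), covering the maker-is-not-the-checker rule, the three states, and the rejected-back-to-maker flow:

```java
import java.util.Objects;

// Hypothetical sketch of the maker-checker rules; all names are illustrative.
public class TransactionEntry {

    public enum Status { FOR_CHECKING, VERIFIED, REJECTED }

    private final String maker;
    private Status status = Status.FOR_CHECKING;
    private String checkerComments;

    public TransactionEntry(String maker) {
        this.maker = Objects.requireNonNull(maker, "Maker is required");
    }

    // Rule: the one who makes the entry cannot be the one who checks it.
    private void requireDifferentIndividual(String checker) {
        if (maker.equals(checker)) {
            throw new IllegalArgumentException("Checker must not be the maker");
        }
    }

    public void accept(String checker) {
        requireDifferentIndividual(checker);
        status = Status.VERIFIED;
    }

    // Rule: a rejected entry goes back to the maker, possibly with comments.
    public void reject(String checker, String comments) {
        requireDifferentIndividual(checker);
        status = Status.REJECTED;
        checkerComments = comments;
    }

    // Rule: an entry is only considered completed once it has been checked.
    public boolean isCompleted() { return status == Status.VERIFIED; }

    public Status status() { return status; }

    public String checkerComments() { return checkerComments; }
}
```

A second level of checking (rule 3 above) could be modeled by requiring more than one accept() call before the entry becomes VERIFIED; I've left that out to keep the sketch small.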

In a future post, I'll share one possible design approach for maker-checker.

Tuesday, November 11, 2014

SOA and Referential Data Integrity

One of the issues that tends to pop up is how to maintain referential integrity between services. In this post, I'd like to share my experiences on how referential data integrity between services can ruin your SOA.

Database Constraints Between Services Break Boundaries

Two of the Tenets of Service-Orientation are: "boundaries are explicit", and "services are autonomous". The first one implies that internal (private) implementation details should not be leaked outside of a service boundary. And the second one implies that services are not subservient to other services (or other pieces of code).

For purposes of discussion, let's say we have the following:

  • a "customer" service that provides customer-related business capability and persists data in a database
  • an "order" service that provides order-related business capability and persists data in the same database
  • foreign key constraints between customer-related entities/tables and order-related entities/tables. More specifically, an "order" table contains the unique ID of the customer that placed the order, and that ID needs to exist in the "customer" table.

Do the above services (customer and order) follow the "boundaries are explicit" tenet? Are the services autonomous? Let's examine further.

The way the "customer" service persists data in a database is a (private) implementation detail internal to it. Likewise, the way the "order" service persists data in a database is internal to it. But how would you classify the foreign key constraints between their database tables? Is this an internal implementation detail leaking outside a service boundary (i.e. leaking outside the "customer" service boundary and into the "order" service boundary)?

If the "order"-related database undergoes some schema changes, will it not affect the "customer"-related database schema? When deploying the schema changes to the "order" service, will it not require the "customer" service to be temporarily unavailable (e.g. due to a database restart)? If "services are autonomous", how come the "customer" and "order" services are inter-dependent, such that a change in one requires a restart (or a redeploy) of the other?

Split Service, Split Database

A better approach would be to split the databases of the two services, and do away with foreign key constraints. That would allow for explicit boundaries, and autonomy. But this might be unacceptable to some people at first.

[Figure: A monolithic application (left) split into services in a service-oriented architecture (right).]

How could one ensure that enough customer information is received before orders are placed by that customer? In other words, how can a developer ensure that orders are placed by known customers (i.e. with an existing customer ID)? Well, in SOA, the services don't have to! It is not the responsibility of the services to maintain this referential integrity. The responsibility of ensuring that an order is placed by a known customer lies with the process of placing orders (not with the services). It is the orchestration layer's responsibility to maintain it.

What if some customer information is modified (e.g. billing address)? Shouldn't the related orders be affected? Again, the process, or orchestration layer, can be responsible for this. Is this really a change in the customer? Or is it just a change in the placed order? One option is to define the process so that it copies the customer's billing information and attaches the copy (i.e. a duplicate) to the placed order. This would mean the order's bill-to (billing) address is fixed when the order is placed. Another option is to leave the bill-to address undetermined until the order is shipped, at which point the billing address provided by the customer service is used.
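As a rough sketch of this idea (all interface and class names here are hypothetical, not from any real system), the order-placing process checks that the customer exists and copies the billing address at placement time, so neither service needs a foreign key into the other's database:

```java
import java.util.Optional;

// Hypothetical orchestration sketch: the process, not the services,
// enforces that orders are placed only by known customers.
public class PlaceOrderProcess {

    // Each service owns its own data; no foreign keys between them.
    public interface CustomerService {
        Optional<String> billingAddressOf(String customerId);
    }

    public interface OrderService {
        void placeOrder(String orderId, String customerId, String billToAddress);
    }

    private final CustomerService customers;
    private final OrderService orders;

    public PlaceOrderProcess(CustomerService customers, OrderService orders) {
        this.customers = customers;
        this.orders = orders;
    }

    public boolean placeOrder(String orderId, String customerId) {
        // Referential integrity lives in the process: look the customer up first...
        Optional<String> billing = customers.billingAddressOf(customerId);
        if (!billing.isPresent()) {
            return false; // unknown customer: reject at the process level
        }
        // ...and copy (duplicate) the billing address into the order, so later
        // changes to the customer don't alter already-placed orders.
        orders.placeOrder(orderId, customerId, billing.get());
        return true;
    }
}
```

In a real deployment, CustomerService and OrderService would be remote calls (and the process would need to handle partial failure); here they are plain interfaces to keep the boundary idea visible.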

Correlation Between Services and Context Boundaries

A useful way of thinking about this is the Domain-Driven Design notion of a Bounded Context. DDD splits a complex domain into multiple bounded contexts and maps out the relationships between them. This results in multiple models, and typically multiple databases.

Re-usable Services

Services (in SOA) are meant to be reusable. In the example, if we reconsider what the customer service contains, we can probably design it in such a way that it isn't only for the purposes of order taking. It can be designed to be reusable, handling any party's information (e.g. persons, organizations), not just customers'. It could be re-used to store employees, since employees can become customers in the future. It could be re-used to store suppliers (since the business may need to track the suppliers of the products being produced and ordered).

The (business) goals of SOA, or any IT initiative, are:

  • increase agility (e.g. support new/changing business processes/models, reduce time to solution)
  • reduce cost (e.g. re-use business processes and/or applications, improve utility of existing/legacy application)

These goals are further translated as (technical goals):

  • increase usability (i.e. re-usability and accessibility across different applications)
  • improve maintainability
  • reduce redundancy

When we make the boundaries of services explicit and make them autonomous, we can better achieve the goals of re-usability and reduced redundancy.

Some other services that come to mind, which can become re-usable when properly designed and split, are:

  • Authentication and authorization - if this were (re-)written for each application, it would be a huge cost.
  • Billing (or invoicing)
  • Product Catalog

Microservices

In a previous post on SOA, I mentioned that I find the term Microservices misleading. Although the information found on the web was good, I found it to still be unclear on implementation. Nonetheless, I did find that microservices add an exciting twist to these ideas.

Closing Thoughts

Finally, when communicating with business people, don't let reuse become the primary measure. They probably won't understand it. Tell them that it helps save development and maintenance costs. Tell them that it improves time to market, reduces days of inventory, reduces employee turnover, etc.

Nice root beer from Virgil's Sodas. Is it available here in the Philippines? I'm not related to the product in any way. I just saw their ad online, and being thirsty, I thought of having one.

After this rather long post, I think it's time for a nice cold drink. Root beer anyone?

Sunday, August 17, 2014

Domain-Driven Design: Cargo Shipping Example (Part 2)

One of my colleagues asked an interesting question about my previous post that I thought would be worth posting here. He asked about the persistence of value objects. He pointed out that the Cargo had a Delivery property that was replaced (not modified). So, what happens on persistence? Does it result in a delete and an insert? Let's see.

Let's look at how a Delivery object is used in a Cargo object. In the code below, we can see that the delivery field is indeed assigned (not modified or updated) with a new Delivery object.

public class Cargo implements Entity<Cargo> {

  private TrackingId trackingId;
  private Location origin;
  private RouteSpecification routeSpecification;
  private Itinerary itinerary;
  private Delivery delivery;

  public Cargo(final TrackingId trackingId, final RouteSpecification routeSpecification) {
    Validate.notNull(trackingId, "Tracking ID is required");
    Validate.notNull(routeSpecification, "Route specification is required");

    this.trackingId = trackingId;
    // Cargo origin never changes, even if the route specification changes.
    // However, at creation, cargo origin can be derived from the initial route specification.
    this.origin = routeSpecification.origin();
    this.routeSpecification = routeSpecification;

    this.delivery = Delivery.derivedFrom(
      this.routeSpecification, this.itinerary, HandlingHistory.EMPTY
    );
  }

  public TrackingId trackingId() { return trackingId; }

  public Location origin() { return origin; }

  public Delivery delivery() { return delivery; }

  public Itinerary itinerary() {
    return DomainObjectUtils.nullSafe(
        this.itinerary, Itinerary.EMPTY_ITINERARY);
  }

  public RouteSpecification routeSpecification() { return routeSpecification; }

  public void specifyNewRoute(final RouteSpecification routeSpecification) {
    Validate.notNull(routeSpecification, "Route specification is required");

    this.routeSpecification = routeSpecification;
    // Handling consistency within the Cargo aggregate synchronously
    this.delivery = delivery.updateOnRouting(this.routeSpecification, this.itinerary);
  }

  public void assignToRoute(final Itinerary itinerary) {
    Validate.notNull(itinerary, "Itinerary is required for assignment");

    this.itinerary = itinerary;
    // Handling consistency within the Cargo aggregate synchronously
    this.delivery = delivery.updateOnRouting(this.routeSpecification, this.itinerary);
  }

  public void deriveDeliveryProgress(final HandlingHistory handlingHistory) {
    // TODO filter events on cargo (must be same as this cargo)

    // Delivery is a value object, so we can simply discard the old one
    // and replace it with a new
    this.delivery = Delivery.derivedFrom(routeSpecification(), itinerary(), handlingHistory);
  }
  . . .
}

Here is the Delivery value object (summarized). It doesn't have mutator methods; the class is immutable. Note that the updateOnRouting() and derivedFrom() methods return new Delivery objects.

public class Delivery implements ValueObject<Delivery> {

  private TransportStatus transportStatus;
  private Location lastKnownLocation;
  private Voyage currentVoyage;
  private boolean misdirected;
  private Date eta;
  private HandlingActivity nextExpectedActivity;
  private boolean isUnloadedAtDestination;
  private RoutingStatus routingStatus;
  private Date calculatedAt;
  private HandlingEvent lastEvent;

  private static final Date ETA_UNKOWN = null;
  private static final HandlingActivity NO_ACTIVITY = null;

  /**
   * Creates a new delivery snapshot to reflect changes in routing, i.e.
   * when the route specification or the itinerary has changed
   * but no additional handling of the cargo has been performed.
   *
   * @param routeSpecification route specification
   * @param itinerary itinerary
   * @return An up to date delivery
   */
  Delivery updateOnRouting(RouteSpecification routeSpecification, Itinerary itinerary) {
    Validate.notNull(routeSpecification, "Route specification is required");

    return new Delivery(this.lastEvent, itinerary, routeSpecification);
  }

  /**
   * Creates a new delivery snapshot based on the complete handling history of a cargo,
   * as well as its route specification and itinerary.
   *
   * @param routeSpecification route specification
   * @param itinerary itinerary
   * @param handlingHistory delivery history
   * @return An up to date delivery.
   */
  static Delivery derivedFrom(
       RouteSpecification routeSpecification,
       Itinerary itinerary,
       HandlingHistory handlingHistory) {
    Validate.notNull(routeSpecification, "Route specification is required");
    Validate.notNull(handlingHistory, "Delivery history is required");

    final HandlingEvent lastEvent = handlingHistory.mostRecentlyCompletedEvent();

    return new Delivery(lastEvent, itinerary, routeSpecification);
  }

  /**
   * Internal constructor.
   */
  private Delivery(
        HandlingEvent lastEvent, Itinerary itinerary,
        RouteSpecification routeSpecification) {
    this.calculatedAt = new Date();
    this.lastEvent = lastEvent;

    this.misdirected = calculateMisdirectionStatus(itinerary);
    this.routingStatus = calculateRoutingStatus(itinerary, routeSpecification);
    this.transportStatus = calculateTransportStatus();
    this.lastKnownLocation = calculateLastKnownLocation();
    this.currentVoyage = calculateCurrentVoyage();
    this.eta = calculateEta(itinerary);
    this.nextExpectedActivity = calculateNextExpectedActivity(routeSpecification, itinerary);
    this.isUnloadedAtDestination = calculateUnloadedAtDestination(routeSpecification);
  }

  public TransportStatus transportStatus() {...}

  public Location lastKnownLocation() {...}

  public Voyage currentVoyage() {...}

  public boolean isMisdirected() {...}

  public Date estimatedTimeOfArrival() {...}

  public HandlingActivity nextExpectedActivity() {...}

  public boolean isUnloadedAtDestination() {...}

  public RoutingStatus routingStatus() {...}

  public Date calculatedAt() {...}
  . . .
}

The above classes don't use JPA annotations. So, I dug into the example's ORM mapping, which happens to be Hibernate (Cargo.hbm.xml). I saw the following:

<hibernate-mapping default-access="field">
  <class name="se.citerus.dddsample.domain.model.cargo.Cargo" table="Cargo">
    <id name="id" column="id">...</id>

    <many-to-one name="origin" column="origin_id" not-null="false" cascade="none" update="false" foreign-key="origin_fk"/>

    <component name="trackingId" unique="true" update="false">...</component>

    <component name="delivery" lazy="true">
      <property name="misdirected" column="is_misdirected" not-null="true"/>
      <property name="eta" column="eta" not-null="false"/>
      <property name="calculatedAt" column="calculated_at" not-null="true"/>
      <property name="isUnloadedAtDestination" column="unloaded_at_dest" not-null="true"/>

      <property name="routingStatus" column="routing_status" not-null="true">...</property>

      <component name="nextExpectedActivity" update="true">
        <many-to-one name="location" column="next_expected_location_id" foreign-key="next_expected_location_fk" cascade="none"/>
        <property name="type" column="next_expected_handling_event_type">...</property>
        <many-to-one name="voyage" column="next_expected_voyage_id" foreign-key="next_expected_voyage_fk" cascade="none"/>
      </component>

      <property name="transportStatus" column="transport_status" not-null="true">...</property>
      <many-to-one name="currentVoyage" column="current_voyage_id" not-null="false" cascade="none" foreign-key="current_voyage_fk"/>
      <many-to-one name="lastKnownLocation" column="last_known_location_id" not-null="false" cascade="none" foreign-key="last_known_location_fk"/>
      <many-to-one name="lastEvent" column="last_event_id" not-null="false" cascade="none" foreign-key="last_event_fk"/>
    </component>
    ...
  </class>
</hibernate-mapping>

The delivery value object is declared as a <component>. A component is a contained object that is persisted as a value type, not as an entity reference. That means it does not have its own table. Instead, its properties are mapped as columns of the surrounding <class>'s table. So here, the delivery object's properties are actually columns in the Cargo table.

So, does it result in a delete and an insert? The answer is neither. It results in an update!

Note that not all value objects are persisted this way. The Leg value objects are deleted whenever a new Itinerary object is assigned to a Cargo, and new Leg value objects are inserted (after the old ones are deleted). So, please check your ORM and persistence mechanisms to be sure, and watch for any performance problems.

But why all the hassle just to use value objects? Good question. Here's what the cargo shipping example has to say:

When possible, we tend to favor Value Objects over Entities or Domain Events, because they require less attention during implementation. Value Objects can be created and thrown away at will, and since they are immutable we can pass them around as we wish.

So, I guess in the cargo shipping example, the likelihood of an itinerary being replaced is quite small. Thus, they didn't have performance problems even when the child Leg objects were being deleted (and new ones inserted).
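To illustrate the replace-don't-mutate style with a made-up example (this Money class is my own, not from the cargo shipping sample), a value object is immutable and compared by its attributes, so "changing" it means assigning a brand new instance:

```java
import java.util.Objects;

// Illustrative value object (not from the cargo shipping example):
// immutable, equality by value, "modified" only by returning a new instance.
public final class Money {

    private final long amountInCents;
    private final String currency;

    public Money(long amountInCents, String currency) {
        this.amountInCents = amountInCents;
        this.currency = Objects.requireNonNull(currency, "Currency is required");
    }

    // No setters: "adding" yields a new Money; the old one can be discarded,
    // just as Cargo discards its old Delivery and assigns a new one.
    public Money add(Money other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("Currency mismatch");
        }
        return new Money(amountInCents + other.amountInCents, currency);
    }

    public long amountInCents() { return amountInCents; }

    public String currency() { return currency; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Money)) return false;
        Money m = (Money) o;
        return amountInCents == m.amountInCents && currency.equals(m.currency);
    }

    @Override
    public int hashCode() { return Objects.hash(amountInCents, currency); }
}
```

Because two Money objects with the same attributes are interchangeable, an ORM is free to persist one as columns of its owner's table (like Hibernate's <component>) rather than as a row with its own identity.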

Monday, August 11, 2014

Purpose of SOA

After having a few more discussions about SOA (after my previous post on the topic), I happened to watch this video (again) where Anne Thomas Manes (now with Gartner via the acquisition of Burton Group) gave a talk at QCon London entitled The Business Value of SOA. Yeah, it's an old video. I'm glad to have watched it again. It helps me go back to the basics, especially with all the on-going confusion around SOA, ESB, SOAP, XML, REST, and the more recent hype around microservices (µservices). In this post, I'll be quoting Anne and writing my takeaways from her 2007 talk. Near the end, I refer to a 2009 blog post from Anne, and a more recent 2012 interview.

Why SOA?

Before I go about dealing with SOA and what it's all about, I'd like to start with why. Why SOA? In Anne's talk, she pointed out that about 80% of the IT budget goes into maintenance and operations. That was 2007. A 2013 Computerworld article entitled How to Balance Maintenance and IT Innovation mentioned that a significant portion of IT budgets still goes to maintenance and operations:

In a recent Forrester Research survey of IT leaders at more than 3,700 companies, respondents estimated that they spend an average 72% of the money in their budgets on such keep-the-lights-on functions as replacing or expanding capacity and supporting ongoing operations and maintenance, while only 28% of the money goes toward new projects.

It seems that very little has changed. In her 2007 talk, Anne pointed out that the problem is not new applications, but the high cost of maintaining and operating existing ones. Why does an enterprise have so many applications? An enterprise usually has about 20 or so core capabilities. So why would it have over 400 applications?

Anne points out that redundancy is the source of the problem. She says that there are too many applications and databases, too much money spent on boat anchors, and very little money left for innovation. The purpose of SOA is to reduce redundancy.

So, why SOA? SOA tries to reduce the cost of maintenance and operations by refactoring redundancy. Newer applications should cost less to build too, since there would be a set of services to re-use. With new applications, or newly applied changes to existing applications, organizations can cope with changes in business and become more agile.

SOA Concepts

"SOA is an architecture for designing systems. … What is architecture? Architecture is a style of design," Anne said. She stressed that it is a design for systems, not applications.

She continues by explaining that in SOA, a service is the core unit of design. So what's a service?

"A service is something that implements a discrete piece of functionality. It exposes its functionality through a well-defined interface … so that applications can consume it," Anne said.

She points out that the service should be consumed by many applications. "And in fact, if you were building a service that is designed to be consumed by a single application, you're probably wasting your time," Anne said. "And I actually want to stress this word consume. Applications consume services. SOA is not about application-to-application integration. That's how a lot of people use it today. They use SOA technologies to integrate applications. But actually your goal when you're doing SOA is to say: what is the capability that is required by multiple applications? Rather than reimplementing that capability many times over in each application that needs it, you refactor the functionality into a service. And then applications consume it."

If you're figuring out which functionality/capability is required by multiple applications, just look around. Look at the existing applications. Look at your application portfolio. Identify which data and/or functionality is being used in multiple applications. Refactor them out as services!

If a functionality is required by many applications, it should be implemented as a service. And it should not be reimplemented in multiple applications.

If the phrase "shared, reusable services" is too vague for you or your team, then try "refactor redundancy". If the phrases "suite of small services running in its own process and communicating with lightweight mechanisms, often an HTTP resource API" or "services built around business capabilities and independently deployable" are a bit confusing, or a bit intimidating, try "reducing redundancy".

I find the term microservices to be further misleading. I prefer to focus on the goal of reducing redundancy rather than on the latest term in vogue.

What SOA is Not?

At about 11:20 into the video, Anne talked about what SOA is not, and touched on why business units are not thrilled about SOA. She says it is not an application-centric approach to building systems. "And this is actually one of the most challenging things about SOA, because all your projects are funded based on applications. [If] you think about it, whenever there's a new project, the project that is funded by a business unit, the business unit wants you to build an application to solve a particular problem. The business unit has no interest whatsoever in having you spend extra time to go off and create a piece of capability in this application as a service, which is then gonna get consumed by some other set of business applications by some other business units."

"And so there's a whole lot of business reasons, incentive reasons, why business units aren't too thrilled with the idea of you spending their money that will wind up benefiting other people."

Here, Anne touched a bit on culture and how business units behave. I've witnessed this myself. It's difficult if the incentives of a business unit are just to solve its own problems, and not to cooperate with other business units to save organization-wide costs, reduce time to market, and increase competitiveness. So, SOA isn't just an IT initiative. It needs to be a business initiative.

Then Anne continues, "But, that's the way people build systems today. And what they're building are monolithic applications. And when you build monolithic applications, you tend to reproduce the same functionality and comparable data in many different systems. And this duplication, this redundancy, is actually the source of most of the trouble in today's IT systems."

Now, here are some more of my takeaways from Anne's talk:

  • SOA is about design, not about technology.
  • SOA is something you do, and not something you build or buy.
  • SOA is much more about culture than about technology.
  • You can't buy [SOA] governance. It's something that you do. (from OTN Archbeat Podcast: SOA Governance)

All of the above was from a talk Anne gave in 2007. Fast forward to early 2009, when Anne blogged that SOA is dead. Three years after that, in 2012, Anne Thomas Manes said, "people are starting to actually get the architecture." Well, I'm glad to see that we're all starting to get it.

Closing Thoughts

Anne Thomas Manes did a great job peeling away all the technology, and allowed SOA's true value to emerge. For me, she was able to answer the why of SOA, which could be easily lost with all the technology and products that vendors are trying to sell.

After applying SOA to your business, you ought to ask yourself how much redundancy you've refactored out, and how much time and money these shared services have saved the organization. For measuring the business value of SOA, please watch another talk by Anne Thomas Manes.

This post sort of takes me back in time when SOA was confusing (and probably still is), and helps me clear some things up. I hope it helps you clear things up too.

Wednesday, July 30, 2014

Domain-Driven Design: Cargo Shipping Example

I've always found the cargo shipping example used in Eric Evans's book quite useful for learning DDD. Here are some of my notes.

Bounded Contexts

Eric Evans did mention in his talk "What I've Learned About DDD Since the Book" at QCon London 2009 that "...in Chapter 14 of the book, I finally got around to talking about context boundaries and context maps... but putting context mapping in Chapter 14 was a mistake. It is fundamental. [You] just can't make models work without establishing context boundaries." Evans said that the reality is that there is always more than one model, and that mapping these models out is crucial to setting the stage for the success of a DDD project.

I believe a lot of teams are struggling with DDD because they're looking for a single, cohesive, all-inclusive model of an organization's entire business domain — you know, like an enterprise model. However, when using DDD, that is not the goal. DDD places emphasis on developing models within a context — A description of the conditions under which a particular model applies. Thus, the strategic design principle of Bounded Context.

Unfortunately, the cargo shipping example has only one bounded context — cargo shipping. I was hoping it had two or more bounded contexts, so that it could illustrate how context mapping works.

The top-level package could have been named cargoshipping. But the packages were named as follows:

  • se.citerus.dddsample.application
  • se.citerus.dddsample.domain.model
  • se.citerus.dddsample.infrastructure
  • se.citerus.dddsample.interfaces

In Vaughn Vernon's book, a similar naming convention was suggested (for the "optimal purchasing" context):

  • com.mycompany.optimalpurchasing.application
  • com.mycompany.optimalpurchasing.domain.model
  • com.mycompany.optimalpurchasing.infrastructure
  • com.mycompany.optimalpurchasing.presentation

Applications

There are three applications (not three contexts) in the example:

  • Booking
  • Tracking
  • Handling (or Incident Logging)

Aggregate Roots

There is one package per aggregate, and to each aggregate belong entities, value objects, domain events, a repository interface, and sometimes factories. The aggregate roots are Cargo, HandlingEvent, Location, and Voyage.

Cargo

The cargo package has the Cargo entity as the aggregate root. It is made up of an Itinerary (with Legs), a Delivery, and a RouteSpecification.

Notice how the Cargo entity does not have getter/setter methods that you'd normally see in anemic domain entities. Instead, it has getters that are differently named (not following JavaBean naming conventions)...

public class Cargo implements Entity<Cargo> {

  private TrackingId trackingId;
  private Location origin;
  private RouteSpecification routeSpecification;
  private Itinerary itinerary;
  private Delivery delivery;

  public Cargo(
        final TrackingId trackingId,
        final RouteSpecification routeSpecification) {...}

  public TrackingId trackingId() { return trackingId; }

  public Location origin() { return origin; }

  public Delivery delivery() { return delivery; }

  public Itinerary itinerary() {
    return DomainObjectUtils.nullSafe(
        this.itinerary, Itinerary.EMPTY_ITINERARY);
  }

  public RouteSpecification routeSpecification() {
    return routeSpecification;
  }
  . . .
}

...and mutator methods that aren't named as setters.

public class Cargo implements Entity<Cargo> {

  . . .
  public void specifyNewRoute(
        final RouteSpecification routeSpecification) {...}

  public void assignToRoute(final Itinerary itinerary) {...}

  public void deriveDeliveryProgress(
        final HandlingHistory handlingHistory) {...}
  . . .
}

The life cycle of a cargo begins with the booking procedure, when the tracking id is assigned. It has an origin and a destination (via a RouteSpecification). As the cargo is being handled (transported to its destination), its delivery and transport status changes (from NOT_RECEIVED to CLAIMED).
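To make that life cycle concrete, here's a minimal sketch of deriving a transport status from the most recent handling event. The mapping below is my own assumption (mirroring the statuses and event types that appear later in this post), not code from the sample:

```java
public class StatusSketch {

  // Event types and transport statuses as they appear in the example.
  enum EventType { RECEIVE, LOAD, UNLOAD, CLAIM, CUSTOMS }
  enum TransportStatus { NOT_RECEIVED, IN_PORT, ONBOARD_CARRIER, CLAIMED, UNKNOWN }

  // Assumed mapping: no events yet means the cargo hasn't been received.
  static TransportStatus derive(EventType lastEvent) {
    if (lastEvent == null) return TransportStatus.NOT_RECEIVED;
    return switch (lastEvent) {
      case LOAD -> TransportStatus.ONBOARD_CARRIER;
      case UNLOAD, RECEIVE, CUSTOMS -> TransportStatus.IN_PORT;
      case CLAIM -> TransportStatus.CLAIMED;
    };
  }

  public static void main(String[] args) {
    System.out.println(derive(null));            // prints NOT_RECEIVED
    System.out.println(derive(EventType.LOAD));  // prints ONBOARD_CARRIER
    System.out.println(derive(EventType.CLAIM)); // prints CLAIMED
  }
}
```

The point is that the status is always derived from handling events, never set directly.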

Handling Event

Although not exactly an aggregate or entity, the HandlingEvent is a domain event used to register the handling event when, for instance, a cargo is unloaded from a carrier at some location at a given time. These events are sent from different applications some time after the event occurred and contain information about the cargo (referenced via tracking id), location, timestamp of the completion of the event, and if applicable, a voyage (referenced via voyage number).

Before I illustrate how a cargo's delivery status is updated through handling events, let me take a closer look at the code behind handling events.

public final class HandlingEvent implements DomainEvent<HandlingEvent> {

  private Type type;
  private Voyage voyage;
  private Location location;
  private Date completionTime;
  private Date registrationTime;
  private Cargo cargo;

  public enum Type implements ValueObject<Type> {
    LOAD(true),
    UNLOAD(true),
    RECEIVE(false),
    CLAIM(false),
    CUSTOMS(false); . . .
  }
  . . .
}

I find it interesting that even though HandlingEvent has a public constructor, it also has a factory — HandlingEventFactory. While the HandlingEvent constructor needs other entities like Cargo, Location, and Voyage, the factory accepts TrackingId, UnLocode, and VoyageNumber to create a new HandlingEvent. Notice how the factory uses the unique IDs of the domain entities to retrieve the entities and create the domain event object.
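Here's a rough, self-contained sketch of that idea, with stand-in types and assumed method signatures (the real sample's factory also resolves a Voyage by VoyageNumber, which I've omitted for brevity):

```java
import java.util.Date;
import java.util.Objects;

public class FactorySketch {

  // Stand-in types; the real sample's classes are richer.
  record TrackingId(String id) {}
  record UnLocode(String code) {}
  record Cargo(TrackingId trackingId) {}
  record Location(UnLocode unLocode) {}
  record HandlingEvent(Cargo cargo, Location location, Date completionTime, String type) {}

  interface CargoRepository { Cargo find(TrackingId trackingId); }
  interface LocationRepository { Location find(UnLocode unLocode); }

  // The factory takes unique IDs, resolves them to entities via repositories,
  // and only then constructs the domain event.
  static class HandlingEventFactory {
    private final CargoRepository cargoRepository;
    private final LocationRepository locationRepository;

    HandlingEventFactory(CargoRepository cargos, LocationRepository locations) {
      this.cargoRepository = cargos;
      this.locationRepository = locations;
    }

    HandlingEvent createHandlingEvent(
        Date completionTime, TrackingId trackingId, UnLocode unLocode, String type) {
      Cargo cargo = Objects.requireNonNull(
          cargoRepository.find(trackingId), "unknown cargo");
      Location location = Objects.requireNonNull(
          locationRepository.find(unLocode), "unknown location");
      return new HandlingEvent(cargo, location, completionTime, type);
    }
  }

  public static void main(String[] args) {
    // In-memory repositories for the demo.
    HandlingEventFactory factory = new HandlingEventFactory(Cargo::new, Location::new);
    HandlingEvent event = factory.createHandlingEvent(
        new Date(), new TrackingId("ABC123"), new UnLocode("CNHKG"), "UNLOAD");
    System.out.println(event.cargo().trackingId().id()); // prints ABC123
  }
}
```

Callers only ever deal in IDs, so the event can be created from data that crossed a system boundary.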

I also find it interesting that there's HandlingHistory to represent a list (or collection) of HandlingEvents. The list is unmodifiable (i.e. read-only), and it has mostRecentlyCompletedEvent() and distinctEventsByCompletionTime() methods.
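A minimal sketch of such a read-only history, with simplified types (the real sample's return types differ; using Optional here is my own choice):

```java
import java.util.Comparator;
import java.util.Date;
import java.util.List;
import java.util.Optional;

public class HandlingHistorySketch {

  record HandlingEvent(String type, Date completionTime) {}

  static class HandlingHistory {
    private final List<HandlingEvent> events;

    HandlingHistory(List<HandlingEvent> events) {
      this.events = List.copyOf(events); // unmodifiable snapshot
    }

    // The event with the latest completion time, if any.
    Optional<HandlingEvent> mostRecentlyCompletedEvent() {
      return events.stream().max(Comparator.comparing(HandlingEvent::completionTime));
    }

    // Distinct events, ordered by completion time.
    List<HandlingEvent> distinctEventsByCompletionTime() {
      return events.stream()
          .distinct()
          .sorted(Comparator.comparing(HandlingEvent::completionTime))
          .toList(); // also unmodifiable
    }
  }

  public static void main(String[] args) {
    HandlingHistory history = new HandlingHistory(List.of(
        new HandlingEvent("RECEIVE", new Date(1_000)),
        new HandlingEvent("LOAD", new Date(2_000))));
    System.out.println(history.mostRecentlyCompletedEvent().get().type()); // prints LOAD
  }
}
```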

The HandlingEventRepository is also interesting, since it doesn't have all the CRUD operations. It only has a method to store a HandlingEvent, and to retrieve the handling history of a particular cargo via its unique TrackingId.

public interface HandlingEventRepository {
  void store(HandlingEvent event);
  HandlingHistory lookupHandlingHistoryOfCargo(
      TrackingId trackingId);
}

Updating a Cargo's Delivery Status

Now, let's see how a Cargo's delivery status is updated. Some might be tempted to provide a setStatus(...) method. But if we inspect the cargo shipping domain, the delivery status is based on handling events.

Cargo (aggregate root) entity has a Delivery value object that is the actual transportation of the cargo, as opposed to the customer requirement (RouteSpecification) and the plan (Itinerary). Delivery is updated via a list of handling events (HandlingHistory).

public class Cargo implements Entity<Cargo> {
  private TrackingId trackingId;
  private Location origin;
  private RouteSpecification routeSpecification;
  private Itinerary itinerary;
  private Delivery delivery;
  . . .
  public void deriveDeliveryProgress(
        final HandlingHistory handlingHistory) {
    . . .
  }
}

The delivery of a cargo is defined by its status at the last known location (e.g. on-board carrier in Hong Kong). It's also possible that the cargo is misdirected.

public class Delivery implements ValueObject<Delivery> {
  private TransportStatus transportStatus;
  private Location lastKnownLocation;
  private Voyage currentVoyage;
  private boolean misdirected;
  private Date eta;
  private HandlingActivity nextExpectedActivity;
  private boolean isUnloadedAtDestination;
  private RoutingStatus routingStatus;
  private Date calculatedAt;
  private HandlingEvent lastEvent;
  . . .
}

public enum TransportStatus implements ValueObject<TransportStatus> {
  NOT_RECEIVED, IN_PORT, ONBOARD_CARRIER, CLAIMED, UNKNOWN;
  . . .
}

I find that there's overlap between HandlingEvent, HandlingActivity, and TransportStatus. They could have been merged to simplify and clarify the model. If merged, I'd choose the name HandlingActivity, since I find the word "event" to be a bit too technology-centric (probably influenced by "domain event"), and the word "activity" sounds more apt for the domain.

Why was HandlingEvent not part of the Cargo aggregate? Good question. At first, I was also thinking that it would be easier if the handling events were part of a cargo aggregate. According to the cargo shipping example:

The main reason for not making HandlingEvent part of the cargo aggregate is performance. HandlingEvents are received from external parties and systems, e.g. warehouse management systems, port handling systems, that call our HandlingReportService webservice implementation. The number of events can be very high and it is important that our webservice can dispatch the remote calls quickly. To be able to support this use case we need to handle the remote webservice calls asynchronously, i.e. we do not want to load the big cargo structure synchronously for each received HandlingEvent. Since all relationships in an aggregate must be handled synchronously we put the HandlingEvent in an aggregate of its own and we are able to process the events quickly and at the same time eliminate dead-locking situations in the system.

Anti-corruption Layer

The cargo shipping example may not have shown context mapping. But it did show one strategic DDD pattern — Anti-corruption Layer. Although not much explanation was put into the code, here's what I can gather from it.

A HandlingReportService interface was defined and exposed as a SOAP-based web service.

public interface HandlingReportService {
  public void submitReport(HandlingReport arg0) throws...;
}

It accepts a HandlingReport. Note that this is not the HandlingEvent domain event. HandlingReport is eventually translated to a HandlingEvent. This effectively shields the cargo shipping context's domain (layer) from being corrupted by all the external applications that would be posting handling events to the system.

<complexType name="handlingReport">
  <complexContent>
    <restriction base="anyType">
      <sequence>
        <element name="completionTime" type="xs:dateTime"/>
        <element name="trackingIds" type="xs:string" maxOccurs="unbounded"/>
        <element name="type" type="xs:string"/>
        <element name="unLocode" type="xs:string"/>
        <element name="voyageNumber" type="xs:string" minOccurs="0"/>
      </sequence>
    </restriction>
  </complexContent>
</complexType>
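Here's a sketch of that translation step at the boundary, using simplified stand-in types. The names RegisterHandlingEvent and translate are my own, not from the sample; the idea is that raw strings from the external report are parsed into domain value objects before anything reaches the domain layer:

```java
public class AclSketch {

  // Domain value objects (simplified).
  record TrackingId(String id) {}
  record UnLocode(String code) {}
  enum Type { LOAD, UNLOAD, RECEIVE, CLAIM, CUSTOMS }

  // External-facing message: raw strings, as defined by the web service schema.
  record HandlingReport(String trackingId, String type, String unLocode) {}

  // Internal command built from validated domain values (hypothetical name).
  record RegisterHandlingEvent(TrackingId trackingId, Type type, UnLocode unLocode) {}

  // The anti-corruption translation: parsing and validation happen here,
  // so malformed external input never corrupts the domain model.
  static RegisterHandlingEvent translate(HandlingReport report) {
    return new RegisterHandlingEvent(
        new TrackingId(report.trackingId()),
        Type.valueOf(report.type()), // throws if the external type is unknown
        new UnLocode(report.unLocode()));
  }

  public static void main(String[] args) {
    RegisterHandlingEvent cmd =
        translate(new HandlingReport("ABC123", "UNLOAD", "CNHKG"));
    System.out.println(cmd.type()); // prints UNLOAD
  }
}
```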

And since the web service needs to return as soon as possible (due to the huge number of events coming in), it does not write directly to the database. Instead, it creates a HandlingEventRegistrationAttempt that gets sent to a message queue. The message is then asynchronously consumed by HandlingEventRegistrationAttemptConsumer, which delegates to HandlingEventService (an application service). HandlingEventServiceImpl (the implementation) simply uses the HandlingEventRepository to store the handling event. Again, notice how HandlingEventService uses unique IDs as input parameters (not domain entities).

public interface HandlingEventService {
  void registerHandlingEvent(
          Date completionTime,
          TrackingId trackingId,
          VoyageNumber voyageNumber,
          UnLocode unLocode,
          HandlingEvent.Type type)
      throws CannotCreateHandlingEventException;
}
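The asynchronous hand-off could be sketched with an in-memory queue like this. This is a simplification: the sample uses JMS, and the HandlingEventRegistrationAttempt here is a bare stand-in record:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncRegistrationSketch {

  // Stand-in for the message placed on the queue by the web service.
  record HandlingEventRegistrationAttempt(String trackingId, String type) {}

  public static void main(String[] args) throws Exception {
    BlockingQueue<HandlingEventRegistrationAttempt> queue = new LinkedBlockingQueue<>();

    // Web service side: enqueue the attempt and return immediately.
    queue.put(new HandlingEventRegistrationAttempt("ABC123", "UNLOAD"));

    // Consumer side: drain the queue asynchronously and delegate onward.
    ExecutorService consumer = Executors.newSingleThreadExecutor();
    Future<String> handled = consumer.submit(() -> {
      HandlingEventRegistrationAttempt attempt = queue.take();
      // The real consumer would call handlingEventService.registerHandlingEvent(...)
      return attempt.trackingId();
    });

    System.out.println(handled.get()); // prints ABC123
    consumer.shutdown();
  }
}
```

The web service thread never blocks on the database; it only pays the cost of an enqueue.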

How does cargo.deriveDeliveryProgress(HandlingHistory) get called? From HandlingEventService, the event is stored and then published by calling ApplicationEvents.cargoWasHandled(...). This, in turn, is implemented using a message queue (yep, another message queue). The event (in the message queue) is consumed by CargoHandledConsumer, which then calls CargoInspectionService. That service finally uses repositories to retrieve the cargo in question and its related handling history.

  1. HandlingEventService -> ApplicationEvents
  2. ApplicationEvents -> CargoHandledConsumer (via JMS)
  3. CargoHandledConsumer -> CargoInspectionService
  4. CargoInspectionService -> cargo.deriveDeliveryProgress(...)
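The last step in that chain might look roughly like this sketch, with stand-in types (inspectCargo and the repository shapes here are assumptions based on the flow described above):

```java
public class InspectionSketch {

  // Stand-in types for the sketch.
  record TrackingId(String id) {}
  static class HandlingHistory {}
  static class Cargo {
    private boolean progressDerived;
    void deriveDeliveryProgress(HandlingHistory history) { progressDerived = true; }
    boolean progressDerived() { return progressDerived; }
  }

  interface CargoRepository {
    Cargo find(TrackingId trackingId);
    void store(Cargo cargo);
  }
  interface HandlingEventRepository {
    HandlingHistory lookupHandlingHistoryOfCargo(TrackingId trackingId);
  }

  // Step 4: load the cargo and its history, derive progress, store the cargo.
  static class CargoInspectionService {
    private final CargoRepository cargoRepository;
    private final HandlingEventRepository handlingEventRepository;

    CargoInspectionService(CargoRepository cargos, HandlingEventRepository events) {
      this.cargoRepository = cargos;
      this.handlingEventRepository = events;
    }

    void inspectCargo(TrackingId trackingId) {
      Cargo cargo = cargoRepository.find(trackingId);
      HandlingHistory history =
          handlingEventRepository.lookupHandlingHistoryOfCargo(trackingId);
      cargo.deriveDeliveryProgress(history);
      cargoRepository.store(cargo);
    }
  }

  public static void main(String[] args) {
    Cargo cargo = new Cargo();
    CargoInspectionService service = new CargoInspectionService(
        new CargoRepository() {
          public Cargo find(TrackingId id) { return cargo; }
          public void store(Cargo c) { /* no-op for the demo */ }
        },
        id -> new HandlingHistory());
    service.inspectCargo(new TrackingId("ABC123"));
    System.out.println(cargo.progressDerived()); // prints true
  }
}
```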

Kinda long-winded if you ask me. But looking at the implementation, I think I can understand why asynchronous events were preferred.

Ending Thoughts

Boy, was this a long post or what?!

While the cargo shipping example does leave a lot to be desired, it was still a good example. I learned a lot from it. I hope you do too. If you find other things worth noting in the cargo shipping example, please hit the comments and let me know.

Wednesday, July 9, 2014

My Lean Startup Experience (Manila)

I attended the first Lean Startup Machine (LSM) in Manila. Here's my experience.

Long Hours...

It was a long two and a half days of getting out of the building, talking to strangers, thinking about ideas, listening, and trying to stay awake with lots of caffeine.

At the start of the event, attendees were asked to log their ideas if they had any. When the event started, the ones with ideas were given 50 seconds to pitch. It was hilarious!

After 20 or so ideas were pitched, we voted for the ones we wanted to support, and the top 15 ideas were left. We were then asked to choose which ideas we'd like to go with, and that became our groupings.

I was hoping to have a live Skype session with Trevor Owens, Founder & CEO of Lean Startup Machine. I guess the internet connection wasn't good enough.

Validation Board

Although I first learned about the validation board (via leanstartupmachine.com) a few years back, I wasn't clear on how it would be used to identify customers, their problems, and possible solutions. Dr. Bernard Wong was able to walk us through how it is used. I find the latest version (called the Experiment Board) easier to use than the previous one.

Of course, listening to the speaker wasn't enough. Knowing is not enough. We must apply. I needed to get down and dirty, and do it! Boy, I was glad I was part of a team that felt comfy going out of the building and asking strangers.

Experiments

One of the more difficult parts of the experiment was deciding on the success criterion. I couldn't figure out if 5 out of 10, or 2 out of 3, would be a good success criterion. One of the coaches clarified that this is really a "gut feel". I like how he put it. Imagine if it were your own money being invested in the idea. If the team came up with an experiment that resulted in only 5 out of 10, would you put your money into it? Or would you put your money into something that had 8 out of 10? Simple.

Lessons Learned

Here are some lessons I've learned. Some of them can be just my opinions. So, don't believe everything.

Customer and Problem First, Solutions Last

I've committed this mistake again and again. I would assume that a particular segment of the population had a certain problem that would need a solution. And before I even thought about running experiments, I was already busy with the solution. Don't assume. It makes an Ass out of U and Me.

Start with the customer first. Run some experiments to see if a particular segment of the population (customers) do indeed have that problem and are in need of (i.e. already hacked, or willing to pay/buy) a solution. You don't even have to have a solution just yet. Focus on validating if the customer segment does indeed have that problem. Who knows? You might end up with a different customer segment, or a totally different problem.

In my experience, we were able to refine our customer segment. At first, we thought a bigger segment of the population had that problem. We learned that it wasn't true! In fact, a smaller, more specific, segment had that problem (for example, at first, you'd think that college students have this problem, but it turns out that people who have extra income have this problem).

Note also that you can be in a two-sided (or n-sided) market. In which case, you'll need to run more experiments to further understand the customer-problem on each side.

Pivot After the Experiment is Invalidated

When testing your riskiest assumption, it is possible to get failures (i.e. not meeting the success criteria). When you do, don't worry: pivot. Again, don't assume that it will fail or succeed!

When pivoting, you'll need some more insights (validated learning) from your experiments to help you pivot to your next experiment, like why some people said yes to the problem, and some said no.

Don't Get Hung Up with Your MVP

Focus on your experiments first. MVPs will only be useful after you've validated your hypothesis through experiments. So, don't get hung up with your MVP.

When Eric Ries used the term for the first time he described it as:

A Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.

(emphasis added)

Now, MVP may mean different things to different people. I just want to add that an MVP is not the only way to learn about customers.

Some people make the mistake of prematurely jumping into their MVPs and incorrectly assume that a minimum set of features will deliver value to the customer and start generating revenue (or getting paid). While I do agree with, and strongly advocate, the saying charge from day one and get paid, make sure you've validated something first. By the time you get to your MVP (e.g. concierge, wizard of oz), you ought to have proven some of your hypotheses through experiments (e.g. customer interviews, teaser pages, etc.).

Happy Validating!

Happy validating! More power to your lean startup machines! Congratulations to our team (Carlo, Mannix, Tieza, Mar Kevin)! We got first runner up! Yeah baby, yeah!

Tuesday, June 3, 2014

Quantifying Domain Model versus Transaction Script

I've been conducting training classes (with Orange and Bronze) that cover topics like TDD, design patterns (GoF), patterns of enterprise application architecture (based on PoEAA book by Martin Fowler), and others. And a question keeps coming up about comparing (and quantifying) the benefits of domain model pattern compared to transaction script. So, I thought I'd post an explanation here.

Note that the Transaction Script pattern is not bad. Fowler himself says that there are virtues to this pattern:

The glory of Transaction Script is its simplicity. Organizing logic this way is natural for applications with only a small amount of logic, and it involves very little overhead either in performance or in understanding.

It's hard to quantify the cutover level, especially when you're more familiar with one pattern than the other. You can refactor a Transaction Script design to a Domain Model design, but it's harder than it needs to be.

However much of an object bigot you become, don't rule out Transaction Script. There are a lot of simple problems out there, and a simple solution will get you up and running faster.

(PoEAA p.111-112)

Here, I use a simple banking example to illustrate the difference between the Transaction Script and Domain Model patterns in organizing domain logic. Then, I compare the two using metrics like method lines of code and cyclomatic complexity.

Banking Example

In the banking example, we shall implement a very simple money transfer, where an amount is transferred from one account to another.

Money Transfer Overview

The MoneyTransferService shall be implemented in two ways: one using Transaction Script, and another using Domain Model.

public interface MoneyTransferService {
  BankingTransaction transfer(
      String fromAccountId, String toAccountId, double amount);
}

Two Implementations of Money Transfer Service

Transaction Script

Using a Transaction Script design, the domain logic for transferring money between two accounts is all placed inside the MoneyTransferService implementation.

public class MoneyTransferServiceTransactionScriptImpl
      implements MoneyTransferService {
  private AccountDao accountDao;
  private BankingTransactionRepository bankingTransactionRepository;
  . . .
  @Override
  public BankingTransaction transfer(
      String fromAccountId, String toAccountId, double amount) {
    Account fromAccount = accountDao.findById(fromAccountId);
    Account toAccount = accountDao.findById(toAccountId);
    . . .
    double newBalance = fromAccount.getBalance() - amount;
    switch (fromAccount.getOverdraftPolicy()) {
    case NEVER:
      if (newBalance < 0) {
        throw new DebitException("Insufficient funds");
      }
      break;
    case ALLOWED:
      if (newBalance < -limit) {
        throw new DebitException(
            "Overdraft limit (of " + limit + ") exceeded: " + newBalance);
      }
      break;
    }
    fromAccount.setBalance(newBalance);
    toAccount.setBalance(toAccount.getBalance() + amount);
    BankingTransaction moneyTransferTransaction =
        new MoneyTranferTransaction(fromAccountId, toAccountId, amount);
    bankingTransactionRepository.addTransaction(moneyTransferTransaction);
    return moneyTransferTransaction;
  }
}

The Account entity is merely a bag of getters and setters.

// @Entity
public class Account {
  // @Id
  private String id;
  private double balance;
  private OverdraftPolicy overdraftPolicy;
  . . .
  public String getId() { return id; }
  public void setId(String id) { this.id = id; }
  public double getBalance() { return balance; }
  public void setBalance(double balance) { this.balance = balance; }
  public OverdraftPolicy getOverdraftPolicy() { return overdraftPolicy; }
  public void setOverdraftPolicy(OverdraftPolicy overdraftPolicy) {
    this.overdraftPolicy = overdraftPolicy;
  }
}

The OverdraftPolicy is an enumerated type.

public enum OverdraftPolicy {
  NEVER, ALLOWED
}

Domain Model

Using a Domain Model design, the domain logic for transferring money between two accounts is spread across the domain objects (e.g. Account and OverdraftPolicy). This keeps each piece simple and easier to maintain.

public class MoneyTransferServiceDomainModelImpl
      implements MoneyTransferService {
  private AccountRepository accountRepository;
  private BankingTransactionRepository bankingTransactionRepository;
  . . .
  @Override
  public BankingTransaction transfer(
      String fromAccountId, String toAccountId, double amount) {
    Account fromAccount = accountRepository.findById(fromAccountId);
    Account toAccount = accountRepository.findById(toAccountId);
    . . .
    fromAccount.debit(amount);
    toAccount.credit(amount);
    BankingTransaction moneyTransferTransaction =
        new MoneyTranferTransaction(fromAccountId, toAccountId, amount);
    bankingTransactionRepository.addTransaction(moneyTransferTransaction);
    return moneyTransferTransaction;
  }
}

The Account entity contains behavior and domain logic. Notice how it contains #debit(double) and #credit(double) methods, and not just getters and setters.

// @Entity
public class Account {
  // @Id
  private String id;
  private double balance;
  private OverdraftPolicy overdraftPolicy;
  . . .
  public double balance() { return balance; }
  public void debit(double amount) {
    this.overdraftPolicy.preDebit(this, amount);
    this.balance = this.balance - amount;
    this.overdraftPolicy.postDebit(this, amount);
  }
  public void credit(double amount) {
    this.balance = this.balance + amount;
  }
}

The OverdraftPolicy has two implementations that contain logic. Based on business rules, the OverdraftPolicy implementations throw exceptions to prevent the Account balance from being debited.

public interface OverdraftPolicy {
  void preDebit(Account account, double amount);
  void postDebit(Account account, double amount);
}

public class NoOverdraftAllowed implements OverdraftPolicy {
  public void preDebit(Account account, double amount) {
    double newBalance = account.balance() - amount;
    if (newBalance < 0) {
      throw new DebitException("Insufficient funds");
    }
  }
  public void postDebit(Account account, double amount) {
  }
}

public class LimitedOverdraft implements OverdraftPolicy {
  private double limit;
  . . .
  public void preDebit(Account account, double amount) {
    double newBalance = account.balance() - amount;
    if (newBalance < -limit) {
      throw new DebitException(
          "Overdraft limit (of " + limit + ") exceeded: " + newBalance);
    }
  }
  public void postDebit(Account account, double amount) {
  }
}

Metrics

Now here are some of the metrics (via Eclipse Metrics Plugin).

Metric                               Transaction Script   Domain Model
McCabe Cyclomatic Complexity (max)   5                    2
Number of Classes                    4                    6
Method Lines of Code (max)           25                   9
Total Lines of Code                  82                   96

Now here are the metrics screenshots for transaction script
and domain model.

Conclusion

The resulting overall lines of code are almost the same. The Domain Model pattern produces more classes, but simpler methods (lower cyclomatic complexity and fewer lines per method).

There are more things to compare than just lines of code and cyclomatic complexity. For example, the Domain Model pattern requires more OO design skill, while the Transaction Script pattern is easier to implement.

The good thing is, there's no need to make a decision up-front. One can always start with a Transaction Script (i.e. do the simplest thing that could possibly work), and when complexity starts to set in, it can be refactored to have richer domain entities, and work its way to using a Domain Model pattern.

Let me know (via comments) if anyone wants to see the code. I can upload it to GitHub.

Thursday, May 15, 2014

Service-Orientation, Object-Orientation, SOA, and DDD

With some recent discussions on SOA, ESB, and managing its complexity, I thought I'd write about some relevant points (at least, the ones I find quite relevant) and some questions.

Web Services and SOA

How do you know you're doing SOA? The common answer is, when you're using web services. Unfortunately, web services have nothing to do with SOA.

WTF?!? SOA != Web services

Web services were not created because of SOA! Yup, that's right. They were not created because of SOA.

SOA was first coined by Gartner back in 1996 (see note), long before any web services technologies were even developed.

Web services were created to get heterogeneous enterprise packages and systems (mostly from big vendors like Oracle, IBM, Microsoft) to talk to each other in an interoperable way. The W3C started the web services group sometime in 2002 to do just that, and they came up with the web services standards that we know today (e.g. SOAP, WSDL).

W3C was not thinking of SOA when the web services group was developing standards. Unfortunately, the industry mistakenly thought that web services are for service-oriented architecture (SOA). Simply because of the word "service"—web services, service-oriented architecture. Duh?

Tenets of Service Orientation

Like object-orientation, the industry gave service-orientation a spin and, in doing so, learned some lessons. In doing OO, we've come to understand that it's not just about inheritance. We've come to favor composition over inheritance, and to program to an interface, not to an implementation. In doing service-orientation, we've come to apply some design guidelines. One of them is Don Box's Four Tenets of Service Orientation. It originally appeared back in 2004 when Don Box published an article on MSDN called "A Guide to Developing and Running Connected Systems with Indigo" (Indigo is what's known today as Windows Communication Foundation, or WCF for short).

Don defined the following four (4) tenets:

  • Boundaries are explicit
  • Services are autonomous
  • Services share schema and contract, not class
  • Service compatibility is determined based on policy

Although it is showing its age, I still find it quite relevant. I think most people make the mistake of focusing on the technology of SOA, and forgetting about the underlying design principles of service-orientation. It seems that most are making the mistake of using some kind of ESB (or some other SOA-based tool) as a substitute for implementing SOA correctly.

One of the tenets indicate that (service) boundaries are explicit. So, how do we mark the boundaries of a service? What's a service in the first place?

The word 'boundary' (in boundaries are explicit) reminded me of a website (Should I Use SOA?) that Edge showed me a while back. It asks if you're Amazon. If you answer 'No', it shows a page saying you should not be using SOA, as it is too early. It goes on to explain something about separating services, getting the proper breakdown, and aligning team boundaries with service boundaries.

In service-orientation, I like Udi Dahan's definition of a service.

A service is the technical authority for a specific business capability.

Any piece of data or rule must be owned by only one service.

- The Known Unknowns of SOA - from Udi Dahan's Blog

It's really about business capability. And there is a strong emphasis on encapsulation such that any piece of data or rule must be owned by only one service.

So, if in my organization, we have the business capability of showing a catalog of products, and that is separate from pricing, does this mean we have to mark the boundaries along those lines of separation?

Or say, if in an organization, the HR owns the employee and employment data, and another group owns the customer data, does this mean that we have a service for HR to own all employee/employment data and rules, while another service owns all the customer data? What if a former employee becomes a customer, or vice-versa?

Here's another thought. Since services in service-orientation are built around a specific business capability, I remember reading an interview with Werner Vogels, Chief Technology Officer at Amazon, where he mentions that Amazon applies the motto You build it, you run it. Here, the team that develops the product (is this a service? a business capability?) is responsible for maintaining it in production for its entire life-cycle. All products are services managed by the teams that built them. Each team is dedicated to its product throughout its life-cycle, and the organization is built around product management instead of project management.

There is another lesson here: Giving developers operational responsibilities has greatly enhanced the quality of the services, both from a customer and a technology point of view. The traditional model is that you take your software to the wall that separates development and operations, and throw it over and then forget about it. Not at Amazon. You build it, you run it. This brings developers into contact with the day-to-day operation of their software. It also brings them into day-to-day contact with the customer. This customer feedback loop is essential for improving the quality of the service.

- an interview with Werner Vogels, Chief Technology Officer at Amazon

If service-orientation is about business capability, wouldn't DDD (as it emphasizes business-domain) be appropriate to determine the 'boundaries' and the separation?

If services are autonomous, does it mean it shouldn't be relying on other services to function properly?

When I read services share schema and contract, not class, I ask myself, isn't this programming to an interface, not to an implementation?

Are your (SOA) services following (or breaking) any of the four (4) tenets?

There's so much more to service-orientation than I could write in this post. I hope to write more in the coming months.

Service-Orientation vs. Object-Orientation

Don Box explains:

Service-orientation is an important complement to object-orientation that applies the lessons learned from component software, message-oriented middleware and distributed object computing. Service-orientation differs from object-orientation primarily in how it defines the term "application." Object-oriented development focuses on applications that are built from interdependent class libraries. Service-oriented development focuses on systems that are built from a set of autonomous services.

- A Guide to Developing and Running Connected Systems with Indigo
(Indigo is what's known today as Windows Communication Foundation or WCF for short)

Another point of difference is in how messages are communicated. In service-orientation, services send and receive data (with no attached behavior). In OO, objects can send messages that contain both data and behavior. In service-orientation, since services can be implemented on different platforms, they can only safely send/receive data (without behavior).
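To make that concrete, here's a small illustrative sketch of my own (not from any of the referenced articles): a service-boundary message carries only data, while an object inside an OO model carries data plus behavior.

```java
public class MessageSketch {

  // What a service sends/receives across a boundary: pure data,
  // trivially serializable regardless of the platform on either side.
  record TransferRequested(String fromAccountId, String toAccountId, double amount) {}

  // What lives inside one service's OO model: data plus behavior.
  static class Account {
    private double balance;
    Account(double balance) { this.balance = balance; }
    void credit(double amount) { this.balance += amount; }
    double balance() { return balance; }
  }

  public static void main(String[] args) {
    TransferRequested message = new TransferRequested("A-1", "B-2", 100.0);
    Account account = new Account(50.0);
    account.credit(message.amount()); // behavior stays on the receiving side
    System.out.println(account.balance()); // prints 150.0
  }
}
```

The record crosses the wire; the Account, with its credit behavior, never does.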

In Closing

I still have a lot to learn about service-orientation and SOA. Writing this post has helped me clear things up. I think it's good to see that the industry is building on good things that worked (like service-orientation's emphasis on boundaries and autonomy, which I think builds on top of OO's encapsulation). I hope to write more about this as I do more SOA.