Functional streams

Stream pipelines

The most commonly used operations, filter, map, and reduce (in various forms), are often combined with other operations in stream pipelines.

Let's consider some examples.

Example 1. The total count of words starting with "JA"

Given a list of strings named words, we'd like to count the total number of words that start with "JA", ignoring case ("ja", "jA", "Ja" and "JA" all match).

Here is a solution that uses map, filter and count operations.

long totalNumberOfWordsStartingWithJA = words.stream()
        .map(String::toUpperCase)         // convert all words to upper case
        .filter(s -> s.startsWith("JA"))  // keep only words starting with the prefix
        .count();  // the terminal operation that counts the suitable words

This pipeline of three operations (methods) gives us a clear and concise solution.
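For instance, here is a minimal, self-contained sketch with a hypothetical words list (the class name and sample values are only for illustration):

import java.util.List;

public class CountingWordsDemo {
    public static void main(String[] args) {
        // a hypothetical sample list; "Java", "JAZZ" and "jacket" match the prefix
        List<String> words = List.of("Java", "magic", "JAZZ", "Kotlin", "jacket");

        long totalNumberOfWordsStartingWithJA = words.stream()
                .map(String::toUpperCase)
                .filter(s -> s.startsWith("JA"))
                .count();

        System.out.println(totalNumberOfWordsStartingWithJA); // prints 3
    }
}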

Example 2. Getting names of events

The class Event represents a public event (a conference, a film premiere or a concert). It has two fields:
  • LocalDate beginning - the date when the event happens;
  • String name - the name of the event (for instance, "JavaOne - 2017").

Also, the class has getters and setters for each field with the corresponding names.

We have a list of instances named events.
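Since the text only describes the class, here is a minimal sketch of what such an Event class might look like (the exact declaration is assumed):

import java.time.LocalDate;

public class Event {
    private LocalDate beginning; // the date when the event happens
    private String name;         // the name of the event

    public LocalDate getBeginning() { return beginning; }
    public void setBeginning(LocalDate beginning) { this.beginning = beginning; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}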

Let's find the names of all events that will occur from December 30 to December 31, 2017 (inclusive).

LocalDate after = LocalDate.of(2017, 12, 29);
LocalDate before = LocalDate.of(2018, 1, 1);
        
List<String> suitableEvents = events.stream()
        .filter(e -> e.getBeginning().isAfter(after) && e.getBeginning().isBefore(before))
        .map(Event::getName)
        .collect(Collectors.toList());

The code above finds the names of all suitable events and collects them into a new list of strings. Note that isAfter and isBefore perform strict comparisons, so the bounds are set to December 29, 2017 and January 1, 2018 in order to include both December 30 and 31.
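As a rough usage sketch (assuming the Event class sketched above; the sample events are made up for illustration):

Event premiere = new Event();
premiere.setName("Film premiere");
premiere.setBeginning(LocalDate.of(2017, 12, 30)); // inside the range

Event concert = new Event();
concert.setName("New Year concert");
concert.setBeginning(LocalDate.of(2018, 1, 5));    // outside the range

List<Event> events = List.of(premiere, concert);

// after running the pipeline above, suitableEvents contains only "Film premiere"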

Example 3. The results of experiments

The result of an experiment is a double number from 0.0 to 1.0. We have an instance of List<Double> named results that stores a collection of the experiment's results.

Let's calculate the average value of the results that fall strictly between 0.4 and 0.5.

double low = 0.4;
double high = 0.5;

double avg = results.stream()
        .mapToDouble(r -> r) // transforming this Stream<Double> to DoubleStream
        .filter(r -> r > low && r < high) // filtering the stream
        .average() // evaluating the average of the suitable elements
        .orElse((low + high) / 2); // the default value if there are no suitable results

First, each Double is unboxed to a primitive double. Then, the elements are filtered by the range condition. After that, the average value is calculated. If the filtered stream doesn't contain any elements, the orElse method returns the given default value.
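Here is a quick sketch with hypothetical results (the values are made up for illustration):

List<Double> results = List.of(0.32, 0.42, 0.47, 0.88);

double avg = results.stream()
        .mapToDouble(r -> r)
        .filter(r -> r > 0.4 && r < 0.5)
        .average()
        .orElse(0.45);

System.out.println(avg); // the average of 0.42 and 0.47, approximately 0.445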

As you have already seen, stream pipelines allow us to write short and readable code for various computations.

Mapping and reducing functions

Since functions are represented as objects, we can map and reduce them just like regular values.

For example, suppose we have a collection of integer predicates. Let's negate each predicate using the map operator and then combine all the predicates into one with a logical AND using the reduce operator.

public static IntPredicate negateEachAndConjunctAll(Collection<IntPredicate> predicates) {
    return predicates.stream()
            .map(IntPredicate::negate)
            .reduce(n -> true, IntPredicate::and);
}

In this example, map negates each predicate in the stream, and then reduce combines all the predicates into one. The initial value (seed) of the reduction is a predicate that is always true, because it is the neutral element for conjunction.

So, the input predicates P1(x), P2(x), ..., Pn(x) will be reduced into one predicate Q(x) = ¬P1(x) & ¬P2(x) & ... & ¬Pn(x).
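As a small illustration (the sample predicates are not from the original text; the snippet assumes java.util.function.IntPredicate and java.util.List are imported):

IntPredicate isNegative = n -> n < 0;
IntPredicate isEven = n -> n % 2 == 0;

// Q(x) is true only for non-negative odd numbers
IntPredicate q = negateEachAndConjunctAll(List.of(isNegative, isEven));

System.out.println(q.test(3));  // true:  3 is non-negative and odd
System.out.println(q.test(4));  // false: 4 is even
System.out.println(q.test(-3)); // false: -3 is negative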
