I can create a recursive closure:
static IntUnaryOperator fibo;
static {
    // the assignment has to be a separate statement so the lambda can refer to the already declared field
    fibo = i -> i < 2 ? 1 : fibo.applyAsInt(i - 1) + fibo.applyAsInt(i - 2);
}
It seems you are asking for something like this:
import java.math.BigInteger;
import java.util.AbstractList;
import java.util.Iterator;
import java.util.stream.Stream;

public class Fibonacci extends AbstractList<BigInteger> {
    @Override
    public Stream<BigInteger> stream() {
        // each stream element is a pair (current, next); mapping to p[0] yields the sequence itself
        return Stream.iterate(new BigInteger[]{ BigInteger.ONE, BigInteger.ONE },
                              p -> new BigInteger[]{ p[1], p[0].add(p[1]) }).map(p -> p[0]);
    }
    @Override
    public Iterator<BigInteger> iterator() {
        return stream().iterator();
    }
    @Override
    public int size() {
        return Integer.MAX_VALUE;
    }
    @Override
    public BigInteger get(int index) {
        return stream().skip(index).findFirst().get();
    }
}
It’s accessible via the List interface (it doesn’t implement RandomAccess for a good reason), thus you may ask for the n’th value via get(n). Note that the implementation of get hints at how you can get values at positions after Integer.MAX_VALUE: just use stream().skip(position).findFirst().get().
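For example, a quick usage sketch (the index 10 is arbitrary; with this class, index 0 holds the first 1):

Fibonacci fib = new Fibonacci();
System.out.println(fib.get(10));                              // prints 89
System.out.println(fib.stream().skip(10).findFirst().get());  // same value; this form also works for positions beyond Integer.MAX_VALUE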
Beware! This list is infinite, as you asked. Don’t ask it for anything that operates on all elements, not even toString(). But things like the following will work smoothly:
System.out.println(new Fibonacci().subList(100, 120));
or
for (BigInteger value : new Fibonacci()) {
    System.out.println(value);
    if (someCondition()) break;
}
However, when you have to process large sequences of elements and want to do it efficiently, you should work with the iterator or the stream to avoid the O(n²) complexity of repeated get calls.
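For example, a single stream pass with limit (the count of 20 is arbitrary) instead of calling get(i) in a loop:

new Fibonacci().stream().limit(20).forEach(System.out::println);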
Note that I changed the element type to BigInteger, as it would be pointless to think about infinite streams of the Fibonacci sequence with the int or long value type: even with long, the sequence is over after only 92 values because overflow occurs.
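As a small sanity check of that claim, here is a sketch that counts how many values fit into a long before wrap-around (assuming the 1, 1, 2, … convention used above); the overflow shows up as a negative value:

long a = 1, b = 1;
int count = 2;                 // a and b already hold the first two values
while (b > 0) {
    long next = a + b;         // wraps to a negative number once the true value exceeds Long.MAX_VALUE
    a = b;
    b = next;
    count++;
}
System.out.println("overflow at value #" + count);   // prints 93, i.e. only 92 values fit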
Update: now that you have made clear that you are looking for lazy storage, you may change the class above as follows:
import java.math.BigInteger;
import java.util.AbstractList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.stream.Stream;

public class Fibonacci extends AbstractList<BigInteger> {
    final Map<BigInteger, BigInteger> values = new HashMap<>();

    public Fibonacci() {
        // base cases that end the recursion
        values.put(BigInteger.ZERO, BigInteger.ONE);
        values.put(BigInteger.ONE, BigInteger.ONE);
    }
    @Override
    public BigInteger get(int index) {
        return get(BigInteger.valueOf(index));
    }
    public BigInteger get(BigInteger index) {
        // beware: on newer JDKs, HashMap rejects a mapping function that modifies the map
        // (ConcurrentModificationException)
        return values.computeIfAbsent(index, ix ->
            get(ix.subtract(BigInteger.ONE)).add(get(ix.subtract(BigInteger.valueOf(2)))));
    }
    @Override
    public Stream<BigInteger> stream() {
        return Stream.iterate(BigInteger.ZERO, i -> i.add(BigInteger.ONE)).map(this::get);
    }
    @Override
    public Iterator<BigInteger> iterator() {
        return stream().iterator();
    }
    @Override
    public int size() {
        return Integer.MAX_VALUE;
    }
}
I used BigInteger as key/index here to fulfill the requirement of being (theoretically) infinite, though a long key would do for all practical purposes. The key point is the initially empty storage (shown here with long):
final Map<Long, BigInteger> values = new HashMap<>();
which is pre-initialized with the values that end each recursion (unless it ends earlier due to already computed values):
values.put(1L, BigInteger.ONE);
values.put(0L, BigInteger.ONE);
Then, we can ask for a lazily computed value via:
public BigInteger get(long index) {
    return values.computeIfAbsent(index, ix -> get(ix - 1).add(get(ix - 2)));
}
or a stream delegating to the get method described above:
LongStream.range(0, Long.MAX_VALUE).mapToObj(this::get);
This creates a stream that is only “practically infinite”, whereas the complete example class above, using BigInteger, is theoretically infinite. The Map will remember every computed value of the sequence.
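If the recursive computeIfAbsent call is a problem on your JDK (newer HashMap implementations reject a mapping function that modifies the map with a ConcurrentModificationException), a sketch of the same lazy storage using an explicit get/put, assuming the long-keyed map shown above, behaves equivalently:

public BigInteger get(long index) {
    BigInteger value = values.get(index);
    if (value == null) {                            // not computed yet
        value = get(index - 1).add(get(index - 2));
        values.put(index, value);                   // remember it for every later call
    }
    return value;
}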
The solution is implemented as a class FunctionalSequence, representing a lazy, infinite sequence of objects defined by a lambda function with an integer argument. The function may be defined in terms of previous elements (a recurrence) or not. For the recurrent case, the FunctionalSequence class has a method initialize for setting the start values.
The declaration of an object of this class looks like this:
FunctionalSequence<BigInteger> fiboSequence = new FunctionalSequence<>();
fiboSequence
    .initialize(Stream.of(BigInteger.ONE, BigInteger.ONE))
    .setSequenceFunction(
        i -> fiboSequence.get(i - 2).add(fiboSequence.get(i - 1))
    );
Notice that, as with the recursive lambda example in the question, we cannot declare the object and define it recursively in a single statement: one statement for the declaration, another for the definition.
The FunctionalSequence class definition:
import java.util.Iterator;
import java.util.LinkedList;
import java.util.stream.Stream;

public class FunctionalSequence<T> implements Iterable<T> {
    LinkedList<CountedFlighweight<T>> realList = new LinkedList<>();
    StackOverflowingFunction<Integer, T> calculate = null;

    public FunctionalSequence<T> initialize(Stream<T> start) {
        start.forEachOrdered((T value) -> {
            realList.add(new CountedFlighweight<>());
            realList.getLast().set(value);
        });
        return this;
    }

    public FunctionalSequence<T> setSequenceFunction(StackOverflowingFunction<Integer, T> calculate) {
        this.calculate = calculate;
        return this;
    }

    @Override
    public Iterator<T> iterator() {
        return new SequenceIterator();
    }

    public T get(int currentIndex) throws StackOverflowError {
        if (currentIndex < 0) return null;
        // grow the backing list with empty (not yet computed) placeholders up to the requested index
        while (currentIndex >= realList.size()) {
            realList.add(new CountedFlighweight<T>());
        }
        try {
            return realList.get(currentIndex).get(calculate, currentIndex);
        } catch (Exception e) {
            return null;
        }
    }

    public class SequenceIterator implements Iterator<T> {
        int currentIndex;

        @Override
        public boolean hasNext() {
            return true;
        }

        @Override
        public T next() {
            T result = null;
            if (currentIndex == realList.size()) {
                realList.add(new CountedFlighweight<T>());
            }
            // here catching the StackOverflowError is a pure formality; next() never causes deep recursion
            try {
                result = realList.get(currentIndex).get(calculate, currentIndex);
            } catch (StackOverflowError e) {
            }
            currentIndex++;
            return result;
        }
    }

    /**
     * If known is false, the value of reference is irrelevant.
     * If known is true and reference is not null, reference contains the data.
     * If known is true and reference is null, the corresponding data is considered corrupted;
     * a calculation on corrupted data should itself result in corrupted data.
     *
     * @author Pet
     *
     * @param <U>
     */
    public class CountedFlighweight<U> {
        private boolean known = false;
        private U reference;

        /**
         * Used for setting the initial values.
         */
        private void set(U value) {
            reference = value;
            known = true;
        }

        /**
         * Used for data retrieval, or for computing the function and saving the result if necessary.
         * @param calculate
         * @param index
         * @return
         * @throws StackOverflowError
         */
        public U get(StackOverflowingFunction<Integer, U> calculate, int index) throws StackOverflowError {
            if (!known) {
                if (calculate == null) {
                    reference = null;
                } else {
                    try {
                        reference = calculate.apply(index);
                    } catch (Exception e) {
                        reference = null;
                    }
                }
            }
            known = true;
            return reference;
        }
    }

    @FunctionalInterface
    public interface StackOverflowingFunction<K, U> {
        public U apply(K index) throws StackOverflowError;
    }
}
Since the recursive function can easily run into a StackOverflowError, the recursion is organized so that in that case the whole recursive descent rolls back without committing any changes, and the error is rethrown.
Using the FunctionalSequence could look like this:
// by iterator:
int index = 0;
Iterator<BigInteger> iterator = fiboSequence.iterator();
while (index++ < 10) {
    System.out.println(iterator.next());
}
Or like this:
static private void tryFibo(FunctionalSequence<BigInteger> fiboSequence, int i) {
    long startTime = System.nanoTime();
    long endTime;
    try {
        fiboSequence.get(i);
        endTime = System.nanoTime();
        System.out.println("repeated timing for f(" + i + ")=" + (endTime - startTime) / 1000000. + " ms");
    } catch (StackOverflowError e) {
        endTime = System.nanoTime();
        //e.printStackTrace();
        System.out.println("failed counting f(" + i + "), time=" + (endTime - startTime) / 1000000. + " ms");
    }
}
The last function can be used in the following way:
tryFibo(fiboSequence, 1100);
tryFibo(fiboSequence, 100);
tryFibo(fiboSequence, 100);
tryFibo(fiboSequence, 200);
tryFibo(fiboSequence, 1100);
tryFibo(fiboSequence, 2100);
tryFibo(fiboSequence, 2100);
tryFibo(fiboSequence, 1100);
tryFibo(fiboSequence, 100);
tryFibo(fiboSequence, 100);
tryFibo(fiboSequence, 200);
tryFibo(fiboSequence, 1100);
Here are the results (the stack size was limited to 256 KB for the purpose of this test):
1
1
2
3
5
8
13
21
34
55
failed counting f(1100), time=3.555689 ms
repeated timing for f(100)=0.213156 ms
repeated timing for f(100)=0.002444 ms
repeated timing for f(200)=0.266933 ms
repeated timing for f(1100)=5.457956 ms
repeated timing for f(2100)=3.016445 ms
repeated timing for f(2100)=0.001467 ms
repeated timing for f(1100)=0.005378 ms
repeated timing for f(100)=0.002934 ms
repeated timing for f(100)=0.002445 ms
repeated timing for f(200)=0.002445 ms
repeated timing for f(1100)=0.003911 ms
Look, a repeated call of f(i) for the same index takes practically no time: no iterations are made. We cannot reach f(1100) at once because of the StackOverflowError, but after f(200) has been reached once, f(1100) becomes reachable. We made it!
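A simple way to exploit this is to approach a large index in steps that each stay within the stack limit (the step size of 200 is arbitrary):

for (int i = 200; i <= 2000; i += 200) {
    fiboSequence.get(i);   // each call only recurses down to the already memoized prefix
}
System.out.println(fiboSequence.get(2000));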
I cannot think of a good general solution, but if you want to access specifically the two previous elements, this can be done quite easily by defining a custom Spliterator like this:
public static IntStream iterate(int first, int second, IntBinaryOperator generator) {
    Spliterator.OfInt spliterator = new Spliterators.AbstractIntSpliterator(Long.MAX_VALUE,
            Spliterator.ORDERED) {
        int prev1 = first, prev2 = second;
        int pos = 0;

        @Override
        public boolean tryAdvance(IntConsumer action) {
            if (pos < 2) {
                // emit the two seed values first
                action.accept(++pos == 1 ? prev1 : prev2);
            } else {
                int next = generator.applyAsInt(prev1, prev2);
                prev1 = prev2;
                prev2 = next;
                action.accept(next);
            }
            return true;    // the stream is infinite
        }
    };
    return StreamSupport.intStream(spliterator, false);
}
Usage:
iterate(1, 1, Integer::sum).limit(20).forEach(System.out::println);
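Since the seed values and the generator are parameters, the same helper covers other two-term recurrences as well; for example, the Lucas numbers (seeds 2 and 1, shown here just for illustration):

iterate(2, 1, Integer::sum).limit(10).forEach(System.out::println);   // 2, 1, 3, 4, 7, 11, 18, 29, 47, 76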