Spark custom streaming dropping most of the data


Question


I'm following the Spark Streaming custom receiver example given on the Spark site, available at Spark custom receiver.

However, the job seems to drop most of my data. No matter how much data I stream, it is all successfully received at the consumer, but as soon as I apply any map/flatMap operation to it, I only ever see 10 rows of data.

I have modified this program to read from an ActiveMQ queue. Looking at the ActiveMQ web interface, the Spark job successfully consumes all the data I generate. However, only 10 records per batch are processed. I tried changing the batch size to various values and ran it both locally and on a 6-node Spark cluster - the same result everywhere.

It's really frustrating, as I don't know why only a limited amount of data is being processed. Is there something I'm missing here?

This is my Spark program, with the custom receiver included. Note that I'm not actually creating a socket connection; instead, I'm hard-coding the messages for testing purposes. It behaves the same way when the stream comes from a real socket connection.

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.rzt.main;

import com.google.common.collect.Lists;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.storage.StorageLevel;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.receiver.Receiver;
import scala.Tuple2;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ConnectException;
import java.net.Socket;
import java.util.regex.Pattern;

/**
 * Custom Receiver that receives data over a socket. Received bytes are
 * interpreted as text and \n delimited lines are considered as records. They
 * are then counted and printed.
 *
 * Usage: TestReceiv3 <master> <hostname> <port>
 * <master> is the Spark master URL. In local mode, <master> should be
 * 'local[n]' with n > 1. <hostname> and <port> are the host and port of the
 * TCP server that Spark Streaming would connect to in order to receive data.
 *
 * To run this on your local machine, you need to first run a Netcat server `$
 * nc -lk 9999` and then run the example `$ bin/run-example
 * org.apache.spark.examples.streaming.TestReceiv3 localhost 9999`
 */

public class TestReceiv3 extends Receiver<String> {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) {

        // Create the context with a 1 second batch size
        SparkConf sparkConf = new SparkConf().setAppName("TestReceiv3").setMaster("local[4]");
        JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, new Duration(1000));

        // Create an input stream with the custom receiver and count the
        // words in the input stream of \n delimited text (eg. generated
        // by 'nc')
        JavaReceiverInputDStream<String> lines = ssc.receiverStream(new TestReceiv3("TEST", 1));
        JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            public Iterable<String> call(String x) {
                System.out.println("Message received: " + x);
                return Lists.newArrayList(x);
            }
        });

        words.print();
        ssc.start();
        ssc.awaitTermination();
    }

    // ============= Receiver code that receives data over a socket
    // ==============

    String host = null;
    int port = -1;

    public TestReceiv3(String host_, int port_) {
        super(StorageLevel.MEMORY_AND_DISK_2());
        host = host_;
        port = port_;
    }

    public void onStart() {
        // Start the thread that receives data over a connection
        new Thread() {
            @Override
            public void run() {
                receive();
            }
        }.start();
    }

    public void onStop() {
        // There is nothing much to do, as the thread calling receive()
        // is designed to stop by itself once isStopped() returns true
    }

    /** Create a socket connection and receive data until receiver is stopped */
    private void receive() {
        Socket socket = null;
        String userInput = null;

        try {

            int i = 0;
            // Generate hard-coded test messages instead of reading from
            // the socket
            while (true) {
                i++;
                store("MESSAGE " + i);
                if (i == 1000)
                    break;
            }

            // Restart in an attempt to connect again when the server is
            // active again
            restart("Trying to connect again");
        } catch (Throwable t) {
            restart("Error receiving data", t);
        }
    }
}

Answer 1:


The output you are seeing is coming from words.print(). DStream.print only prints the first 10 elements of the DStream. From the docs:

def print(): Unit

Print the first ten elements of each RDD generated in this DStream. This is an output operator, so this DStream will be registered as an output stream and there materialized.

You will need to store the streaming data somewhere (for example, using DStream.saveAsTextFiles(...)) in order to inspect it in full.
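For example, here is a minimal sketch (against the Spark 1.x Java API used in the question) that replaces words.print() with a foreachRDD call, counting every record in each batch and saving the full batch to disk. The output path is just a placeholder:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;

// Instead of words.print(), process the whole batch
words.foreachRDD(new Function<JavaRDD<String>, Void>() {
    public Void call(JavaRDD<String> rdd) {
        long count = rdd.count();
        System.out.println("Records in this batch: " + count);
        if (count > 0) {
            // Persist the complete batch for later inspection;
            // the path below is a placeholder
            rdd.saveAsTextFile("/tmp/stream-batches/" + System.currentTimeMillis());
        }
        return null;
    }
});

If you are on a newer Spark release (1.3+), there is also a print(n) overload, e.g. words.print(100), which shows more than the default ten elements.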



Source: https://stackoverflow.com/questions/27417092/spark-custom-streaming-dropping-most-of-the-data
