Approach 1: Using reduceByKey

Data file word.txt:

张三
李四
王五
李四
王五
李四
王五
李四
王五
王五
李四
李四
李四
李四
李四

Code:

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.rdd.RDD;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

public class HelloWord {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().master("local[*]").appName("Spark").getOrCreate();
        final JavaSparkContext ctx = JavaSparkContext.fromSparkContext(spark.sparkContext());

        // Read the text file; each line holds one word
        RDD<String> rdd = spark.sparkContext().textFile("C:\\Users\\boco\\Desktop\\word.txt", 1);
        JavaRDD<String> javaRDD = rdd.toJavaRDD();

        // Map each line to a (word, 1) pair
        JavaPairRDD<String, Integer> javaRDDMap = javaRDD.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Sum the counts for each key
        JavaPairRDD<String, Integer> result = javaRDDMap.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer integer, Integer integer2) throws Exception {
                return integer + integer2;
            }
        });

        System.out.println(result.collect());
    }
}

Output:

[(张三,1), (李四,9), (王五,5)]
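
With Java 8, the same pair mapping and reduction can be written with lambdas instead of the anonymous PairFunction and Function2 classes; a minimal sketch that reuses the javaRDD built above:

// Same word count, expressed with lambdas
JavaPairRDD<String, Integer> counts = javaRDD
        .mapToPair(s -> new Tuple2<>(s, 1))
        .reduceByKey((a, b) -> a + b);
System.out.println(counts.collect());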

Approach 2: Using Spark SQL

Implementation with Spark SQL:

import java.util.ArrayList;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class HelloWord {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().master("local[*]").appName("Spark").getOrCreate();
        final JavaSparkContext ctx = JavaSparkContext.fromSparkContext(spark.sparkContext());

        // Read the text file; each line becomes a Row with a single string column
        JavaRDD<Row> rows = spark.read().text("C:\\Users\\boco\\Desktop\\word.txt").toJavaRDD();

        // Build a schema with one nullable string column named "key"
        ArrayList<StructField> fields = new ArrayList<StructField>();
        StructField field = DataTypes.createStructField("key", DataTypes.StringType, true);
        fields.add(field);
        StructType schema = DataTypes.createStructType(fields);

        Dataset<Row> ds = spark.createDataFrame(rows, schema);
        ds.createOrReplaceTempView("words");
        Dataset<Row> result = spark.sql("select key, count(0) as key_count from words group by key");
        result.show();
    }
}

Result:

+---+---------+
|key|key_count|
+---+---------+
| 王五| 5|
| 李四| 9|
| 张三| 1|
+---+---------+
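
The same aggregation can also be expressed with the DataFrame API instead of a SQL string; a minimal sketch that reuses the ds Dataset built above (note that groupBy().count() names the count column "count" rather than "key_count"):

// Group by the "key" column and count rows per group
Dataset<Row> grouped = ds.groupBy("key").count();
grouped.show();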

Approach 3: Using Spark Streaming for real-time stream analysis

Reference: http://spark.apache.org/docs/latest/streaming-programming-guide.html

First, we create a JavaStreamingContext object, which is the main entry point for all streaming functionality. We create a local StreamingContext with two execution threads, and a batch interval of 1 second.

import org.apache.spark.*;
import org.apache.spark.api.java.function.*;
import org.apache.spark.streaming.*;
import org.apache.spark.streaming.api.java.*;
import scala.Tuple2;

// Create a local StreamingContext with two working threads and batch interval of 1 second
SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount");
JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

Using this context, we can create a DStream that represents streaming data from a TCP source, specified as hostname (e.g. localhost) and port (e.g. 9999).

// Create a DStream that will connect to hostname:port, like localhost:9999
JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);

This lines DStream represents the stream of data that will be received from the data server. Each record in this stream is a line of text. Then, we want to split the lines by space into words.

// Split each line into words
JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(x.split(" ")).iterator());

flatMap is a DStream operation that creates a new DStream by generating multiple new records from each record in the source DStream. In this case, each line will be split into multiple words and the stream of words is represented as the words DStream. Note that we defined the transformation using a FlatMapFunction object. As we will discover along the way, there are a number of such convenience classes in the Java API that help define DStream transformations.
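
The lambda above implements the FlatMapFunction interface; for reference, here is the same transformation written with an explicit anonymous class, a sketch assuming the lines DStream defined earlier:

import java.util.Arrays;
import java.util.Iterator;
import org.apache.spark.api.java.function.FlatMapFunction;

// Split each line into words using an explicit FlatMapFunction instead of a lambda
JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
    @Override
    public Iterator<String> call(String line) {
        return Arrays.asList(line.split(" ")).iterator();
    }
});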

Next, we want to count these words.

// Count each word in each batch
JavaPairDStream<String, Integer> pairs = words.mapToPair(s -> new Tuple2<>(s, 1));
JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey((i1, i2) -> i1 + i2);

// Print the first ten elements of each RDD generated in this DStream to the console
wordCounts.print();

The words DStream is further mapped (one-to-one transformation) to a DStream of (word, 1) pairs, using a PairFunction object. Then, it is reduced to get the frequency of words in each batch of data, using a Function2 object. Finally, wordCounts.print() will print a few of the counts generated every second.

Note that when these lines are executed, Spark Streaming only sets up the computation it will perform after it is started; no real processing has started yet. To start the processing after all the transformations have been set up, we finally call the start method.

jssc.start();              // Start the computation
jssc.awaitTermination(); // Wait for the computation to terminate

The complete code can be found in the Spark Streaming example JavaNetworkWordCount.

If you have already downloaded and built Spark, you can run this example as follows. You will first need to run Netcat (a small utility found in most Unix-like systems) as a data server by using

$ nc -lk 9999

Then, in a different terminal, you can start the example by using

$ ./bin/run-example streaming.JavaNetworkWordCount localhost 9999

Complete code:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import scala.Tuple2;

public class HelloWord {
    public static void main(String[] args) throws InterruptedException {
        // Create a local StreamingContext with a batch interval of 60 seconds
        SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("NetworkWordCount");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        jsc.setLogLevel("WARN");
        JavaStreamingContext jssc = new JavaStreamingContext(jsc, Durations.seconds(60));

        // Create a DStream that will connect to hostname:port, like localhost:9999
        JavaReceiverInputDStream<String> lines = jssc.socketTextStream("xx.xx.xx.xx", 19999);

        // Split each line into words
        JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(x.split(" ")).iterator());

        // Count each word in each batch
        JavaPairDStream<String, Integer> pairs = words.mapToPair(s -> new Tuple2<>(s, 1));
        JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey((i1, i2) -> i1 + i2);

        // Print the first ten elements of each RDD generated in this DStream to the console
        wordCounts.print();

        jssc.start();            // Start the computation
        jssc.awaitTermination(); // Wait for the computation to terminate
    }
}

Test:

[root@abced dx]# nc -lk 19999
hellow wrd
hello word
hello word
hello dkk
hl
hello
hello
hello word
hello word
hello java
hello c@
hello hadoop]
hello spark
hello word
hello kafka
hello c
hello c#
hello .net core
net cre
workd
hle
hello words
hke hjh
hek
hel
hl3
hhk
hke

Program output:

-------------------------------------------
Time: ms
-------------------------------------------
(c,)
(spark,)
(kafka,)
(c#,)
(hello,)
(java,)
(c@,)
(hadoop],)
(word,)

(Interleaved with the batch output, the log also shows repeated WARN messages from RandomBlockReplicationPolicy and BlockManager about input blocks being replicated to fewer peers than requested.)

-------------------------------------------
Time: ms
-------------------------------------------
(hle,)
(words,)
(.net,)
(hello,)
(workd,)
(cre,)
(net,)
(core,)

-------------------------------------------
Time: ms
-------------------------------------------
(,)
(hhk,)
(hek,)
(hel,)
(,)
(hjh,)
(,)
(hke,)
(,)
(hl3,)

Conclusion: the stream is processed batch by batch and the counts are not accumulated; each batch's statistics do not include data from earlier batches, only the data received in the current batch.
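
If a running total across batches is needed, Spark Streaming's stateful operators can carry the counts forward. Below is a minimal sketch using updateStateByKey on the pairs DStream from the code above; it assumes checkpointing is enabled with a hypothetical directory such as /tmp/ckpt:

import java.util.List;
import org.apache.spark.api.java.Optional;

// Stateful word count: add each batch's counts to the running total per word
jssc.checkpoint("/tmp/ckpt"); // hypothetical checkpoint directory, required for stateful operations
JavaPairDStream<String, Integer> totals = pairs.updateStateByKey(
        (List<Integer> newValues, Optional<Integer> state) -> {
            int sum = state.or(0);
            for (Integer v : newValues) {
                sum += v;
            }
            return Optional.of(sum);
        });
totals.print();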
