Spark and Cassandra Java application: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/Dataset

Problem Description

I have an amazingly simple Java application that I copied almost entirely from this example: http://markmail.org/download.xqy?id=zua6upabiylzeetp&number=2

All I want to do is read the table data and display it in the Eclipse console.

My pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>chat_connaction_test</groupId>
  <artifactId>ChatSparkConnectionTest</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>com.datastax.cassandra</groupId>
      <artifactId>cassandra-driver-core</artifactId>
      <version>3.1.0</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>2.0.0</version>
    </dependency>

    <!-- https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector_2.10 -->
    <dependency>
      <groupId>com.datastax.spark</groupId>
      <artifactId>spark-cassandra-connector_2.10</artifactId>
      <version>2.0.0-M3</version>
    </dependency>

    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming_2.10 -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.10</artifactId>
      <version>2.0.0</version>
    </dependency>
    <!--
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.10</artifactId>
      <version>1.5.2</version>
    </dependency>
    -->
  </dependencies>
</project>

And my Java code:

package com.chatSparkConnactionTest;

import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
import java.io.Serializable;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import com.datastax.spark.connector.japi.CassandraRow;

public class JavaDemo implements Serializable {
    private static final long serialVersionUID = 1L;
    public static void main(String[] args) {

        SparkConf conf = new SparkConf().
            setAppName("chat").
            setMaster("local").
            set("spark.executor.memory","1g").
            set("spark.cassandra.connection.host", "127.0.0.1");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<String> cassandraRowsRDD = javaFunctions(sc)
            .cassandraTable("chat", "dictionary")
            .map(new Function<CassandraRow, String>() {
                @Override
                public String call(CassandraRow cassandraRow) throws Exception {
                    String tempResult = cassandraRow.toString();
                    System.out.println(tempResult);
                    return tempResult;
                }
            });
        System.out.println("Data as CassandraRows: \n" +
            cassandraRowsRDD.collect().size()); // THIS IS A LINE WITH ERROR
    } 
}

And here is my error:

16/10/05 20:49:18 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/Dataset
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Unknown Source)
    at java.lang.Class.getDeclaredMethod(Unknown Source)
    at java.io.ObjectStreamClass.getPrivateMethod(Unknown Source)
    at java.io.ObjectStreamClass.access$1700(Unknown Source)
    at java.io.ObjectStreamClass$2.run(Unknown Source)
    at java.io.ObjectStreamClass$2.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.io.ObjectStreamClass.<init>(Unknown Source)
    at java.io.ObjectStreamClass.lookup(Unknown Source)
    at java.io.ObjectOutputStream.writeObject0(Unknown Source)
    at java.io.ObjectOutputStream.defaultWriteFields(Unknown Source)
    at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
    at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
    at java.io.ObjectOutputStream.writeObject0(Unknown Source)
    at java.io.ObjectOutputStream.defaultWriteFields(Unknown Source)
    at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
    at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
    at java.io.ObjectOutputStream.writeObject0(Unknown Source)
    at java.io.ObjectOutputStream.writeObject(Unknown Source)
    at scala.collection.immutable.$colon$colon.writeObject(List.scala:379)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at java.io.ObjectStreamClass.invokeWriteObject(Unknown Source)
    at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
    at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
    at java.io.ObjectOutputStream.writeObject0(Unknown Source)
    at java.io.ObjectOutputStream.defaultWriteFields(Unknown Source)
    at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
    at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
    at java.io.ObjectOutputStream.writeObject0(Unknown Source)
    at java.io.ObjectOutputStream.defaultWriteFields(Unknown Source)
    at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
    at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
    at java.io.ObjectOutputStream.writeObject0(Unknown Source)
    at java.io.ObjectOutputStream.defaultWriteFields(Unknown Source)
    at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
    at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
    at java.io.ObjectOutputStream.writeObject0(Unknown Source)
    at java.io.ObjectOutputStream.writeObject(Unknown Source)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:43)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2037)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1896)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1911)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:893)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:892)
    at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:360)
    at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45)
    at com.chatSparkConnactionTest.JavaDemo.main(JavaDemo.java:37)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.Dataset
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    ... 58 more

I have updated my pom.xml, but that did not resolve the error. Can someone help me solve this problem?

Thanks!

Update 1: Here is a screenshot of my build path: link to my screenshot

Recommended Answer

You are getting the "java.lang.NoClassDefFoundError: org/apache/spark/sql/Dataset" error because the "spark-sql" dependency is missing from your pom.xml file.

If you want to read a Cassandra table with Spark 2.0.0, you need at least the dependencies below. Note that all three artifacts target the same Scala version (2.11 here); whichever Scala version you choose, keep it consistent across all Spark and connector artifacts.

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.11</artifactId>
    <version>2.0.0-M3</version>
</dependency>
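
If you also keep the spark-streaming dependency from your original pom.xml, move it to the matching Scala version as well; for example (assuming you stay on 2.11):

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.0</version>
</dependency>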

Spark 2.0.0 provides the SparkSession and Dataset APIs. Below is a sample program for reading a Cassandra table and printing its records.

import java.util.HashMap;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkCassandraDatasetApplication {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
            .builder()
            .appName("SparkCassandraDatasetApplication")
            .config("spark.sql.warehouse.dir", "/file:C:/temp")
            .config("spark.cassandra.connection.host", "127.0.0.1")
            .config("spark.cassandra.connection.port", "9042")
            .master("local[2]")
            .getOrCreate();

        // Read the Cassandra table through the Spark SQL data source
        Dataset<Row> dataset = spark.read().format("org.apache.spark.sql.cassandra")
            .options(new HashMap<String, String>() {
                {
                    put("keyspace", "mykeyspace");
                    put("table", "mytable");
                }
            }).load();

        // Print the data
        dataset.show();
        spark.stop();
    }
}
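
Once loaded, the Dataset supports the usual Spark SQL operators. A minimal sketch of selecting and filtering (the column names "id" and "username" are assumptions matching the bean shown later, not something read from your schema):

// Hypothetical columns; replace "id" and "username" with columns from your own table.
Dataset<Row> users = dataset.select("id", "username").filter("username IS NOT NULL");
users.show();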

If you still want to use RDDs, use the sample program below.

import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import com.datastax.spark.connector.japi.CassandraJavaUtil;

public class SparkCassandraRDDApplication {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setAppName("SparkCassandraRDDApplication")
            .setMaster("local[2]")
            .set("spark.cassandra.connection.host", "127.0.0.1")
            .set("spark.cassandra.connection.port", "9042");

        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read the table, mapping each row to a UserData bean
        JavaRDD<UserData> resultsRDD = javaFunctions(sc).cassandraTable(
            "mykeyspace", "mytable", CassandraJavaUtil.mapRowTo(UserData.class));

        // Print
        resultsRDD.foreach(data -> {
            System.out.println(data.getId());
            System.out.println(data.getUsername());
        });

        sc.stop();
    }
}

The JavaBean (UserData) used in the above program is as follows.

import java.io.Serializable;

public class UserData implements Serializable {
    String id;
    String username;

    public String getId() {
        return id;
    }
    public void setId(String id) {
        this.id = id;
    }
    public String getUsername() {
        return username;
    }
    public void setUsername(String username) {
        this.username = username;
    }
}
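
Note that CassandraJavaUtil.mapRowTo matches Cassandra columns to bean properties by name, so this example assumes the table actually has id and username columns; adjust the bean to your own schema.

If you prefer not to define a bean, you can also work with CassandraRow directly, much like the code in your question. A minimal sketch, reusing the placeholder "mykeyspace"/"mytable" names from above:

// Requires: import com.datastax.spark.connector.japi.CassandraRow;
// Each row comes back as a generic CassandraRow; toString() dumps all columns.
JavaRDD<String> rows = javaFunctions(sc)
    .cassandraTable("mykeyspace", "mytable")
    .map(CassandraRow::toString);
rows.collect().forEach(System.out::println);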

Other Recommended Answer

I think you need to make sure the following resources are present on your classpath:

cassandra-driver-core-2.1.0.jar
metrics-core-3.0.2.jar
slf4j-api-1.7.5.jar
netty-3.9.0-Final.jar
guava-16.0.1.jar
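
For reference, these versions line up with the transitive dependencies declared by cassandra-driver-core 2.1.0, so with Maven they should normally be pulled in automatically; adding them by hand mostly matters when you manage the classpath yourself.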

Hope this helps.

Other Recommended Answer

Remove:

<!-- https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-java_2.10 -->
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.6.0-M1</version>
</dependency>

You are mixing versions on the classpath. The Java module was folded into the core module in Spark Cassandra Connector 2.0.0, so this artifact only exists for older releases and just pulls in Spark 1.6 references.
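
More generally, whenever a Spark class triggers a NoClassDefFoundError, it is worth inspecting the resolved dependency tree for mixed Spark or Scala versions, for example with Maven (the -Dincludes filter merely narrows the output):

mvn dependency:tree -Dincludes=org.apache.spark,com.datastax.spark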
