InternalCompilerException: Compiling "GeneratedClass": Class was loaded through a different loader


I am using the Scala interpreter (IMain) to evaluate Scala statements coming from configuration.

Sample code:
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, SparkSession}

import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.IMain

object BSFTest {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
      .setMaster("local") // spark://

    val sparkSession = SparkSession.builder()
      .config(sparkConf)
      .enableHiveSupport()
      .getOrCreate()

    import sparkSession.sql

    sql("CREATE DATABASE test")

    sql("CREATE TABLE test.box_width (id INT, width INT)")
    sql("INSERT INTO test.box_width VALUES (1,1), (2,2)")

    sql("CREATE TABLE test.box_length (id INT, length INT)")
    sql("INSERT INTO test.box_length VALUES (1,10), (2,20)")

    val widthDF: DataFrame = sql("SELECT * FROM test.box_width")
    val lengthDF = sql("SELECT * FROM test.box_length")

    val settings = new Settings
    settings.usejavacp.value = true
    settings.deprecation.value = true

    val eval = new IMain(settings)
    eval.bind("lengthDF", "org.apache.spark.sql.DataFrame", lengthDF)
    eval.bind("widthDF", "org.apache.spark.sql.DataFrame", widthDF)
    val clazz1 = "lengthDF.join(widthDF, \"id\")" // STATEMENT FROM CONFIGURATION
    val evaluated = eval.interpret(clazz1)
    val res = eval.valueOfTerm("res0").get.asInstanceOf[DataFrame]
    println("PRINT SCHEMA: " + res.schema) // this statement runs fine
    res.show() // EXCEPTION HERE
  }
}

I am getting the following error when executing the code:

lengthDF: org.apache.spark.sql.DataFrame = [id: int, length: int]
widthDF: org.apache.spark.sql.DataFrame = [id: int, width: int]
res0: org.apache.spark.sql.DataFrame = [id: int, length: int ... 1 more field]
PRINT SCHEMA: StructType(StructField(id,IntegerType,true), StructField(length,IntegerType,true), StructField(width,IntegerType,true))
18/10/24 19:35:32 ERROR CodeGenerator: failed to compile: org.codehaus.janino.InternalCompilerException: Compiling "GeneratedClass": Class 'org.apache.spark.sql.catalyst.expressions.codegen.GeneratedClass' was loaded through a different loader
/* 001 */ public java.lang.Object generate(Object[] references) {
/* 002 */   return new SpecificSafeProjection(references);
/* 003 */ }
/* 004 */
/* 005 */ class SpecificSafeProjection extends org.apache.spark.sql.catalyst.expressions.codegen.BaseProjection {
org.codehaus.janino.InternalCompilerException: Compiling "GeneratedClass": Class 'org.apache.spark.sql.catalyst.expressions.codegen.GeneratedClass' was loaded through a different loader
	at org.codehaus.janino.UnitCompiler.compileUnit(
	at org.codehaus.janino.SimpleCompiler.cook(
	at org.codehaus.janino.SimpleCompiler.compileToClassLoader(
Caused by: org.codehaus.janino.InternalCompilerException: Class 'org.apache.spark.sql.catalyst.expressions.codegen.GeneratedClass' was loaded through a different loader
	at org.codehaus.janino.SimpleCompiler$2.getDelegate(
	at org.codehaus.janino.SimpleCompiler$2.accept(
	at org.codehaus.janino.UnitCompiler.getType(


I am not able to understand why res.schema (fetching the schema from the DataFrame) runs as expected, while retrieving the data from the DataFrame and printing it throws the exception.
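For context on what the error message itself means, here is a minimal, Spark-free sketch (my own illustration, not taken from the Spark or Janino sources): when two classloaders each define a class with the same name from the same bytes, the JVM treats them as two distinct, incompatible classes, which is the situation the Janino message describes.

```scala
// Illustration only: two ClassLoaders defining the same class name yield
// different Class objects, so casts between them fail at runtime.
object LoaderDemo {
  class Payload // the class we will load twice

  // A loader that re-defines Payload from its .class bytes instead of
  // delegating to the parent, simulating a second, isolated loader.
  class IsolatingLoader extends ClassLoader(classOf[Payload].getClassLoader) {
    override def loadClass(name: String, resolve: Boolean): Class[_] =
      if (name == "LoaderDemo$Payload") {
        val in = getResourceAsStream(name.replace('.', '/') + ".class")
        val bytes =
          Iterator.continually(in.read()).takeWhile(_ != -1).map(_.toByte).toArray
        in.close()
        defineClass(name, bytes, 0, bytes.length) // fresh, unrelated Class object
      } else super.loadClass(name, resolve)
  }

  def main(args: Array[String]): Unit = {
    val original = classOf[Payload]
    val reloaded = new IsolatingLoader().loadClass("LoaderDemo$Payload")
    println(original.getName == reloaded.getName) // same name
    println(original == reloaded)                 // different classes
  }
}
```

My guess is that something similar happens here: the IMain interpreter and Spark's code generator end up resolving `GeneratedClass` through different loaders.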

scalaVersion := "2.11.11"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.2"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.2"
libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.2.2"

What can I do to solve this issue?