Remove network interfaces from the system via script

If you have network interfaces running that you do not want or that could block other network services (such as a VPN connection), you can take them down with ifconfig.

The currently running network interfaces can be displayed by executing the following command without options in a shell:

ifconfig

The following one-liner automatically takes down all interfaces whose names start with br- when executed:

for i in `ifconfig | grep "^br-" | cut -d " " -f1`; do sudo ifconfig $i down; done

The command greps all lines where the interface name starts with br- (the ^ anchors the pattern to the beginning of the line, so only interface names are matched).
The matching lines are split into columns, the interface names are extracted and the interfaces are shut down via "ifconfig NAME down".

To reuse this, you can store the line in a "stop-network-interfaces.sh" file and place it in your bin directory.
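A minimal sketch of such a file, assuming the same br- prefix as above:

#!/usr/bin/env bash
# stop-network-interfaces.sh - take down all interfaces whose names start with br-
for i in `ifconfig | grep "^br-" | cut -d " " -f1`; do
  sudo ifconfig "$i" down
done

Make the file executable with chmod +x ~/bin/stop-network-interfaces.sh.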

Docker: remove old images and containers

Over time, old images and containers remain on the system and can use a lot of disk space.

Show old containers and images

See the Docker images with their IDs:

docker images -a

See the stopped Docker containers with their IDs:

docker ps -a | grep Exit
or
docker ps -a -f status=exited
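
To list only dangling (untagged) images, the images command also accepts a filter:

docker images -f dangling=true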

Remove single containers and images

Remove a container:

docker rm CONTAINERID

Remove an image:

docker rmi IMAGEID

Force the removal of an image if it is still referenced:

docker rmi -f IMAGEID
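
On newer Docker versions (1.13+) you can also clean up in one go with the prune commands; check what they would delete before running them on an important system:

# Remove all stopped containers
docker container prune

# Remove dangling images
docker image prune

# Remove all unused containers, networks, images and build cache
docker system prune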

A list of additional command options and other useful commands can be found here: https://www.digitalocean.com/community/tutorials/how-to-remove-docker-images-containers-and-volumes

science of colours – colour rules

Why are colours so important?

When we make decisions that involve colours, the colours themselves influence the outcome, for example the colour of a product or whether to click the blue or the red button. Therefore, colours are important and should be considered in design decisions.

  • Colours are perception.
  • Decisions are influenced by colours.
  • People decide within 90 seconds, and 90% of the decisions are influenced by colours.

RGB

The three basic colours of light are red (R), green (G) and blue (B), hence the RGB system. If you mix all three light colours, you get white.

  • known as additive colour mixture
  • mostly used on screens

CMYK

The basic colours are cyan (C), magenta (M), yellow (Y) and black (K, the key colour), and this system is known as subtractive colour mixture. It is most important for printed media like books and magazines.

chromatic circle

The chromatic circle is used to define colour harmonies, colour mixtures and colour palettes.

  • 3 primary colours: red, yellow, blue
  • 3 secondary colours (mixtures of primary colours): green, orange, purple
  • 6 tertiary colours (mixtures of primary and secondary colours): e.g. blue-green, red-purple

Moreover, the chromatic circle can be divided into warm and cold colours.

The colours are often associated as follows:

  • warm colours: energy, activity and intensity
  • cold colours: calm, peace and clarity

colour schemata

Designers use colour schemata to define colours for specific advertisement materials.

  • Complementary colours: colours that are opposite each other on the colour circle; strong contrast between the colours; can make images stand out
  • Analogue colours: colours that lie side by side on the colour circle (e.g. red, yellow, orange); one colour dominates, one supports and one accentuates; pleasant for the eye
  • Triadic colours: colours that are evenly distributed on the colour circle; bright and dynamic; creates visual contrast and harmony at the same time

Transform your Play Application into a Progressive Web App (PWA)

If you want to transform your Play application (https://www.playframework.com) into a Progressive Web App, you can do so with the following steps.

The following example uses Play 2.6.15.

Why choose a PWA?

PWAs combine the best of both worlds: responsive web applications and native apps. The advantages are:

  • Works for every user, and the visualisation adapts to the chosen browser.
  • Responsive display of the content.
  • Service workers make the application usable in offline mode as well.
  • It feels like an app and can be added to the home screen of the user's device. No app store needed.
  • Served via HTTPS, which is required to prevent snooping.
  • An SEO dream: it can be indexed and crawled by search engines.

The transformation of your application into a PWA requires the following steps.

Create a manifest.json file

The manifest.json file contains all relevant information that makes your application recognizable as a PWA. It should be stored in the public folder.

{
  "dir": "ltr",
  "scope": "/",
  "name": "My Application As PWA",
  "short_name": "MAP",
  "icons": [{
    "src": "assets/images/icons/icon-128x128.png",
      "sizes": "128x128",
      "type": "image/png"
    }, {
      "src": "assets/images/icons/icon-144x144.png",
      "sizes": "144x144",
      "type": "image/png"
    }, {
      "src": "assets/images/icons/icon-152x152.png",
      "sizes": "152x152",
      "type": "image/png"
    }, {
      "src": "assets/images/icons/icon-192x192.png",
      "sizes": "192x192",
      "type": "image/png"
    }, {
      "src": "assets/images/icons/icon-256x256.png",
      "sizes": "256x256",
      "type": "image/png"
    }, {
    "src": "assets/images/icons/icon-512x512.png",
    "sizes": "512x512",
    "type": "image/png"
  }],
  "start_url": "/",
  "display": "standalone",
  "background_color": "transparent",
  "theme_color": "transparent",
  "description": "This is a short description.",
  "orientation": "any",
  "related_applications": [],
  "prefer_related_applications": false
}

You must at least specify name, short_name and icons. Additionally, the colour settings background_color and theme_color as well as the scope are very helpful. The scope defines which paths the service worker controls, i.e. from which paths it can intercept requests.

The specified icons must be stored in the public/images/icons folder!

The manifest.json must be referenced in the head of your pages. (At least it should be referenced on the entry page, but it is helpful to include the manifest.json on all pages.)

Within the template, put the link into the head:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>My application page</title>
    <link rel="manifest" href="/manifest.json">
  </head>
  <body>
    <h1 class="vertical-container">My application</h1>
  </body>
</html>

Register service worker

A service worker can be used to cache the files and make the app accessible without an online connection. Moreover, it can provide additional functionality to update content and is necessary to make the application a real PWA (with the prompt to install the application to the home screen).

First we create a simple sw.js file that registers the service worker with the browser. This file must be stored in the public folder.

// Include this file in your pages (see the template below) to register the service worker
if ('serviceWorker' in navigator) {
  if (navigator.serviceWorker.controller) {
    console.log('[PWA Info] active service worker found, no need to register');
  } else {
    // Register the service worker from the root path defined in the routes
    navigator.serviceWorker.register('/service-worker.js', {
      scope: '/'
    }).then(function(reg) {
      console.log('Service worker has been registered for scope: ' + reg.scope);
    });
  }
}

Second, we define a basic service worker that caches an offline page (served under /offline.html, see the routes below) during installation and serves this page when the app is offline. Store the following code in a service-worker.js file in the public folder.

// Name of the cache and the offline page (the page must be served by a route of the application)
var CACHE_NAME = 'mypwa-offline';
var OFFLINE_PAGE = '/offline.html';

// Installation: cache the offline page
self.addEventListener('install', function(event) {
  event.waitUntil(
    fetch(new Request(OFFLINE_PAGE)).then(function(response) {
      return caches.open(CACHE_NAME).then(function(cache) {
        console.log('[PWA Info] Cached offline page');
        return cache.put(OFFLINE_PAGE, response);
      });
    })
  );
});

// Serve the cached offline page if a fetch fails
self.addEventListener('fetch', function(event) {
  event.respondWith(
    fetch(event.request).catch(function(error) {
      console.error('[PWA Info] App offline. Serving stored offline page: ' + error);
      return caches.open(CACHE_NAME).then(function(cache) {
        return cache.match(OFFLINE_PAGE);
      });
    })
  );
});

// Custom event to update the cached offline page
self.addEventListener('refreshOffline', function() {
  return fetch(new Request(OFFLINE_PAGE)).then(function(response) {
    return caches.open(CACHE_NAME).then(function(cache) {
      console.log('[PWA Info] Offline page updated');
      return cache.put(OFFLINE_PAGE, response);
    });
  });
});

Add the sw.js to the end of the main template file.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>My application page</title>
    <link rel="manifest" href="/manifest.json">
  </head>
  <body>
    <h1 class="vertical-container">My application</h1>

    <script type="text/javascript" src="/sw.js"></script>
  </body>
</html>

Add to routes

The manifest.json, the sw.js and the service-worker.js must be accessible from the root path of the deployed application. Therefore, they are added to the routes file and made accessible from the root path.

GET  /service-worker.js controllers.Assets.at(path="/public", file="service-worker.js")
GET  /manifest.json     controllers.Assets.at(path="/public", file="manifest.json")
GET  /sw.js             controllers.Assets.at(path="/public", file="sw.js")
GET  /offline.html      controllers.HomeController.offline

# Map static resources from the /public folder to the /assets URL path
GET  /assets/*file      controllers.Assets.versioned(path="/public", file: Asset)
->   /webjars           webjars.Routes

Assets specification within application.conf

The specification for the Assets folder must be integrated into the application.conf file.

# The asset configuration
# ~~~~~
play.assets {
  path = "/public"
  urlPrefix = "/assets"
}

Additionally, the browser development console can show errors if play.filters.headers is not configured correctly. You have to adapt the configuration to your application. An example configuration is the following:

play.filters.headers {
  contentSecurityPolicy = "default-src 'self' https://cdn.jsdelivr.net;"
  contentSecurityPolicy = ${play.filters.headers.contentSecurityPolicy}" img-src 'self' 'unsafe-inline' data:;"
  contentSecurityPolicy = ${play.filters.headers.contentSecurityPolicy}" style-src 'self' 'unsafe-inline' cdnjs.cloudflare.com maxcdn.bootstrapcdn.com fonts.googleapis.com;"
  contentSecurityPolicy = ${play.filters.headers.contentSecurityPolicy}" font-src 'self' fonts.gstatic.com fonts.googleapis.com cdnjs.cloudflare.com;"
  contentSecurityPolicy = ${play.filters.headers.contentSecurityPolicy}" script-src 'self' 'unsafe-inline' ws: wss: cdnjs.cloudflare.com;"
  contentSecurityPolicy = ${play.filters.headers.contentSecurityPolicy}" connect-src 'self' 'unsafe-inline' ws: wss:;"
}

HomeController action for the offline page

To serve the offline page and adapt its content to your needs, you can create a simple action within a controller that returns the template for the page.

package controllers

import javax.inject._
import play.api.mvc._
import play.api.i18n.I18nSupport

@Singleton
class HomeController @Inject()
  (cc: ControllerComponents,
   implicit val webJarsUtil: org.webjars.play.WebJarsUtil,
   implicit val assets: AssetsFinder)
    extends AbstractController(cc)
    with I18nSupport {

  def offline() = Action { 
    implicit request: Request[AnyContent] =>
      Ok(views.html.offline())
  }
}
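
The action above renders a views.html.offline template, which is not shown in the steps so far. A minimal sketch of such an app/views/offline.scala.html (the content is only a placeholder):

@()

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Offline</title>
  </head>
  <body>
    <h1>You are offline</h1>
    <p>The application is currently not reachable. Please try again when you are back online.</p>
  </body>
</html>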

Additional integration for Safari and iOS browsers

Moreover, some browsers (especially Safari on iOS) need additional meta information in the head of the main template. Therefore, the following part is necessary.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>My application page</title>
    <link rel="manifest" href="/manifest.json">
    <!-- Add to home screen for Safari on iOS -->
    <meta name="apple-mobile-web-app-capable" content="yes">
    <meta name="apple-mobile-web-app-status-bar-style" content="black">
    <meta name="apple-mobile-web-app-title" content="HuST PWA">
    <link rel="apple-touch-icon" href="@assets.path("images/icons/icon-152x152.png")">
  </head>
  <body>
    <h1 class="vertical-container">My application</h1>

    <script type="text/javascript" src="/sw.js"></script>
  </body>
</html>

Test the application

You can add the Lighthouse plugin to your Chrome browser; it tests the application and displays errors that would prevent the application from being served as a PWA.

If you want to test the prompt that is shown on devices (phones) to install the application to the home screen, the application must be served via HTTPS. You can test this locally by running ngrok, a tool that creates a tunnel to the local system and provides an HTTPS URL that can be used to call the application from the phone.
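
Assuming the Play application runs on the default development port 9000, such a tunnel can be opened like this:

ngrok http 9000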

Create a Git repository archive

Sometimes it is necessary to create an archive of the files of a Git repository, e.g. if you want to send the files to another person. For this you can use the git archive command, which creates an archive of the files of the named tree.

Additionally, you can specify the archive type; in the following example a tar.gz file is created from the HEAD tree and written to the file path at the end of the shell command.

Enter the Git repository folder and execute the following command:

git archive --format=tar.gz HEAD > ~/tmp/my-git-archive.tar.gz
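
git archive accepts any other tree-ish as well, so you can, for example, archive a tagged release and prefix all extracted files with a directory name (the tag v1.0 and the prefix are placeholders):

git archive --format=tar.gz --prefix=my-project/ v1.0 > ~/tmp/my-project-v1.0.tar.gz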

Git commands to display the recently changed files with additional status information

If you need to see which files were changed by previous commits, you can use the following commands to display this information with or without a diff.

Display the last changes with the full diff of each commit

git log -p

Display the last changes without a diff, but with the status (added, modified, deleted) of the changed files

git log --name-status

Display the last changes without a diff, but with a per-file summary of the changed lines

git log --stat

Display the last changes without a diff, but with the numbers of added and deleted lines per file

git log --numstat

Cassandra source to retrieve data for specific data types with Apache Flink

The default CassandraSink of Apache Flink is used to store data in Cassandra. If you want to retrieve data from Cassandra, you need another implementation that provides access to the Cassandra cluster and maps the retrieved data.

One solution would be to use the CassandraInputFormat, which returns the retrieved data as a Tuple. The problem with this approach is that Apache Flink Tuples are limited to a maximum of 25 fields.

If you have more columns and additional UDT specifications within your data type, you have to implement your own data type to map the data retrieved from Cassandra.

Definition of an own Cassandra data model

We have to specify our own Java class for the correct mapping of the data. This class can also be used from Scala. A possible data type could look as follows:

import com.datastax.driver.mapping.annotations.*;

import java.io.Serializable;
import java.util.UUID;

@Table(name = "mydata", keyspace = "TestImport")
final public class MyDataRow implements Serializable {

  @Column(name = "uuid")
  @PartitionKey
  private UUID uuid = UUID.randomUUID();

  @Column(name = "name")
  private String name = null;

  @Column(name = "hobby")
  @ClusteringColumn(0)
  private String hobby = null;

  @Column(name = "hobby_data")
  @Frozen
  private HobbyMetaDataT hobbyData = new HobbyMetaDataT();

  // Constructors

  // Getter and Setter
}

The code above defines a row of the data model that is stored as a wide row within Cassandra. The clustering column is what allows the hobbyData to be stored per hobby within the same partition (wide row).

The definition of the UDT is as follows:

import com.datastax.driver.mapping.annotations.Field;
import com.datastax.driver.mapping.annotations.UDT;

@UDT(name = "hobbyT", keyspace = "TestImport")
final public class HobbyMetaDataT {

  @Field(name = "desc")
  private String description = "";

  @Field(name = "prio")
  private java.lang.Integer priority = null;

  // Constructors

  // Getter and Setter
}

hobbyT is the name of the UDT in the Cassandra keyspace.
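
The original steps do not show the CQL schema; a possible schema matching the annotations above could look like this (keyspace name, identifier quoting and column types are assumptions):

CREATE TYPE IF NOT EXISTS "TestImport"."hobbyT" (
  "desc" text,
  prio int
);

CREATE TABLE IF NOT EXISTS "TestImport".mydata (
  uuid uuid,
  name text,
  hobby text,
  hobby_data frozen<"hobbyT">,
  PRIMARY KEY (uuid, hobby)
);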

Implement the InputFormat that wraps the data model

The next step is the implementation of the InputFormat that wraps our data model.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import org.apache.flink.api.common.io.DefaultInputSplitAssigner;
import org.apache.flink.api.common.io.NonParallelInput;
import org.apache.flink.api.common.io.RichInputFormat;
import org.apache.flink.api.common.io.statistics.BaseStatistics;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.io.GenericInputSplit;
import org.apache.flink.core.io.InputSplit;
import org.apache.flink.core.io.InputSplitAssigner;
import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder;
import org.apache.flink.util.Preconditions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import scala.collection.JavaConverters;
import scala.collection.Seq;

import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class CassandraOutFormat<OUT extends MyDataRow> extends RichInputFormat<MyDataRow, InputSplit> implements NonParallelInput {

  /**
   * The list of columns that will be mapped by [[MyDataRowOps.set]]
   */
  private final List<String> myDataRowColumns = Arrays.asList(
    "uuid", "name", "hobby", "hobbyData"
  );

  private static final Logger LOG = LoggerFactory.getLogger(CassandraOutFormat.class);

  private final String query;
  private final ClusterBuilder builder;

  private transient Cluster cluster;
  private transient Session session;
  private transient ResultSet resultSet;

  public CassandraOutFormat(String query, ClusterBuilder builder) {
    Preconditions.checkArgument(!query.isEmpty(), "Query cannot be null or empty");
    Preconditions.checkArgument(builder != null, "Builder cannot be null");

      this.query = query;
      this.builder = builder;
  }

  @Override
  public void configure(Configuration parameters) {
    this.cluster = builder.getCluster();
  }

  @Override
  public BaseStatistics getStatistics(BaseStatistics cachedStatistics) throws IOException {
    return cachedStatistics;
  }

  /**
   * Opens a Session and executes the query.
   *
   * @param ignored
   * @throws IOException
   */
  @Override
  public void open(InputSplit ignored) throws IOException {
    this.session = cluster.connect();
    this.resultSet = session.execute(query);
  }

  @Override
  public boolean reachedEnd() throws IOException {
    return resultSet.isExhausted();
  }

  @Override
  public MyDataRow nextRecord(MyDataRow reuse) throws IOException {
    final Row item = resultSet.one();
    Seq s = JavaConverters.asScalaIteratorConverter(myDataRowColumns.iterator()).asScala().toSeq();
    return FlinkRowOps$.MODULE$.toMyDataRowFromDatastax(s.toList(), item);
  }

  @Override
  public InputSplit[] createInputSplits(int minNumSplits) throws IOException {
    GenericInputSplit[] split = {new GenericInputSplit(0, 1)};
    return split;
  }

  @Override
  public InputSplitAssigner getInputSplitAssigner(InputSplit[] inputSplits) {
    return new DefaultInputSplitAssigner(inputSplits);
  }

  /**
   * Closes all resources used.
   */
  @Override
  public void close() throws IOException {
    try {
      if (session != null) {
        session.close();
      }
    } catch (Exception e) {
      LOG.error("Error while closing session.", e);
    }

    try {
      if (cluster != null) {
        cluster.close();
      }
    } catch (Exception e) {
      LOG.error("Error while closing cluster.", e);
    }
  }
}

The important and relevant steps are:

  • The OUT is necessary to have the correct mapping within Apache Flink Streams.
  • The RichInputFormat needs the specific class MyDataRow in its specification.

Helper classes and methods

Conversion of the retrieved Row to the internal data model

import your.path.to.MyDataRowOps.syntax._

object FlinkRowOps {
  /**
    * Parse the given Datastax row and create a MyDataRow from its values
    * using the given column names and ordering.
    *
    * @param cs An ordered list of columns which must map to the column order of the Datastax `Row`.
    * @param r  A Datastax row.
    * @return A table row of the mydata table.
    */
  @throws[IndexOutOfBoundsException](
    cause = "The given row did not contain a requested column (index)."
  )
  def toMyDataRowFromDatastax(
      cs: Seq[String]
  )(r: com.datastax.driver.core.Row): MyDataRow = {
    val blankRow = new MyDataRow()
    cs.zipWithIndex.foreach { t =>
      val (name, idx) = t
      val value       = r.getObject(idx)
      blankRow.set(name)(value)
    }
    blankRow
  }
}

Helper object that provides the functionality to set the individual fields of the internal data model:

import com.datastax.driver.core.UDTValue
import java.util.UUID
import scala.util.Try

trait MyDataRowOps {
  def set(r: MyDataRow)(n: String)(v: Any): Unit
}

object MyDataRowOps {
  /**
   * Convert the Datastax UDT of HobbyMetaDataT to the internal datatype.
   *
   * @param v Datastax UDT of HobbyMetaDataT
   * @return HobbyMetaDataT
   */
  def createHobbyMetaDataT(v: UDTValue): HobbyMetaDataT = {
    val desc = v.get("desc", classOf[String])
    val prio = v.get("prio", classOf[java.lang.Integer])
    new HobbyMetaDataT(desc, prio)
  }

  @throws[MatchError](cause = "The provided attribute name does not map to an attribute!")
  def setAttribute(r: MyDataRow)(n: String)(v: Any): Unit =
    n match {
      case "uuid" =>
        r.setUUID(Try(v.asInstanceOf[UUID]).toOption.orNull)
      case "name" =>
        r.setName(Try(v.asInstanceOf[String]).toOption.orNull)
      case "hobby" =>
        r.setHobby(Try(v.asInstanceOf[String]).toOption.orNull)
      case "hobbyData" =>
        Try(v.asInstanceOf[UDTValue]).toOption.fold(r.setHobbyData(null)) { u =>
          r.setHobbyData(Try(createHobbyMetaDataT(u)).toOption.orNull)
        }
    }

  /**
   * Implementation of the type class.
   */
  implicit object MyDataRowOpsImpl extends MyDataRowOps {
    override def set(r: MyDataRow)(n: String)(v: Any): Unit =
      setAttribute(r)(n)(v)
  }

  /**
   * Provide syntactic sugar for working on the table rows.
   */
  object syntax {
    /**
     * Concrete implementation to provide syntactic sugar on MyDataRow rows.
     *
     * @param r An instance of a table row.
     */
    implicit final class WrapMyDataRowOps(val r: MyDataRow) extends AnyVal {
      def set(name: String)(value: Any)(implicit ev: MyDataRowOps): Unit =
        ev.set(r)(name)(value)
    }
  }
}

Execute a stream query against Cassandra with own data type

The following example executes a query to retrieve data from Cassandra.

import com.datastax.driver.core.Cluster
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder

object Test {
  def main(args: Array[String]): Unit = {
    val senv = StreamExecutionEnvironment.getExecutionEnvironment

    val source = senv.createInput[MyDataRow](
      new CassandraOutFormat[MyDataRow](
        "SELECT uuid,name,hobby,hobby_data FROM TestImport.mydata WHERE uuid = x;",
        new ClusterBuilder() {
          override def buildCluster(builder: Cluster.Builder): Cluster =
            builder.addContactPoint("127.0.0.1").build() // local test
        }
      )
    )

    val result = source
      .setParallelism(1)
    val w = result.writeAsText("/tmp/data")
    val _ = senv.execute()
  }
}

It is important to mention that each query has to select the columns in the same order as they are listed in myDataRowColumns, because the mapping of the retrieved values is done by column index.

Apache Flink InvalidTypesException because of type erasure problem

If you implement your own InputFormat to access a Cassandra database from Apache Flink, you will probably extend the RichInputFormat. The following example defines the entry point for the specification of an own InputFormat that maps the specified MyClass to the data retrieved from the database:

import com.datastax.driver.mapping.annotations.ClusteringColumn;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
import java.io.Serializable;
import java.util.UUID;

@Table(name = "mydata", keyspace = "TestImport")
final public class MyClass implements Serializable {
  @Column(name = "uuid")
  @PartitionKey
  private UUID uuid = UUID.randomUUID();

  @Column(name = "name")
  private String name = null;
}

import org.apache.flink.api.common.io.NonParallelInput;
import org.apache.flink.api.common.io.RichInputFormat;
import org.apache.flink.core.io.InputSplit;

public class CassandraOutFormat<OUT extends MyClass> 
  extends RichInputFormat<MyClass, InputSplit> 
  implements NonParallelInput {

  // Implement the requested methods
}

The important and relevant steps are:

  • Specify the MyClass as Table via the annotations.
  • The OUT is necessary to have the correct mapping within Apache Flink Streams.
  • The RichInputFormat needs the specific class MyClass in its specification.

If these preconditions are not fulfilled, defining and executing a query against Cassandra (as shown below) throws the following exception:

[error] Exception in thread "main" org.apache.flink.api.common.functions.InvalidTypesException: Type of TypeVariable 'OT' in 'class org.apache.flink.api.common.io.RichInputFormat' could not be determined. This is most likely a type erasure problem. The type extraction currently supports types with generic variables only in cases where all variables in the return type can be deduced from the input type(s).

An example query definition and execution could be as follows:

import com.datastax.driver.core.Cluster
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder

object Test {
  def main(args: Array[String]): Unit = {
    val senv = StreamExecutionEnvironment.getExecutionEnvironment

    val source = senv.createInput[MyClass](
      new CassandraOutFormat[MyClass](
        "SELECT uuid, name FROM TestImport.mydata WHERE uuid = x;",
        new ClusterBuilder() {
          override def buildCluster(builder: Cluster.Builder): Cluster =
            builder.addContactPoint("127.0.0.1").build() // local test
        }
      )
    )

    val result = source
      .setParallelism(1)
    val w = result.writeAsText("/tmp/data")
    val _ = senv.execute()
  }
}

Basic Cassandra commands

The following commands and shell options are used with Cassandra 2.2 and 3.x.

Increase Request Timeout for local database connection

The following option increases the request timeout for the current session.

./cassandra/bin/cqlsh --request-timeout=3600

Clear all former snapshots

./cassandra/bin/nodetool clearsnapshot

Increase Heap size for local session (mostly development)

MAX_HEAP_SIZE=8g HEAP_NEWSIZE=2g ./cassandra/bin/cassandra -f

Starting and connecting to a local Cassandra instance

# Startup
./cassandra/bin/cassandra -f

# Connect
./bin/cqlsh localhost

Show all keyspaces

DESCRIBE KEYSPACES;

Show all tables of a Keyspace

DESCRIBE KEYSPACE keyspaceName;

Set a Keyspace as current working space

USE keyspaceName;
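
If you also need the schema of a single table, cqlsh provides a DESCRIBE for tables as well (tableName is a placeholder):

DESCRIBE TABLE tableName;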

Exchange data between paragraphs of Spark and Flink interpreters with InterpreterContext within Apache Zeppelin

If you have to exchange data from Flink to Spark or from Spark to Flink within Apache Zeppelin, you can use the InterpreterContext to store and reload data between the separate paragraphs.

You can load the InterpreterContext within a Spark paragraph and store the relevant data in its resource pool:

%spark

import org.apache.zeppelin.interpreter.InterpreterContext
 
val resourcePool = InterpreterContext.get().getResourcePool()
 
val n = z.select("name",Seq(("foo", "foo"), ("bar", "bar")))
 
resourcePool.put("name", n)

Within another paragraph that uses the Flink interpreter, you can load the InterpreterContext and use the stored information.

%flink
 
import org.apache.zeppelin.interpreter.InterpreterContext
 
val resourcePool = InterpreterContext.get().getResourcePool()
 
resourcePool.get("name").get.toString