Fedora 30 + netty-tcnative

I had started following the Armeria project and finally decided it’s time to play with it today. Browsed their documentation and example project. Looks good!

And then, I wanted to edit their example code to understand their APIs and how to use them. So I started with the obvious change – use an HTTPS server with a self-signed certificate! It was a two-line change. Sweet!

Started the server. In the logs, I noticed the message below. Strange. That should not be the case, as the project dependencies are set up appropriately. (I asked about this in their Slack channel.)

[main] INFO com.linecorp.armeria.common.Flags - OpenSSL not available: java.lang.IllegalArgumentException: Failed to load any of the given libraries: [netty_tcnative_linux_x86_64, netty_tcnative_linux_x86_64_fedora, netty_tcnative_x86_64, netty_tcnative]

I thought, it’s only INFO. How bad can it be? Let’s try some cURL commands. Very bad! All requests failed due to a TLSv1.3 error?!

[armeria-common-worker-epoll-2-1] WARN com.linecorp.armeria.server.HttpServerPipelineConfigurator - [id: 0xc833df45, L:/127.0.0.1:8080 - R:/127.0.0.1:46360] Unexpected exception:

io.netty.handler.codec.DecoderException: java.lang.IllegalArgumentException: TLSv1.3

Started digging through this. I was running on JDK 8, which does not support TLSv1.3. Could that be the reason? Why?

As Trustin suggested, I enabled Netty’s debug log to understand what happened. And then it all started to make sense.

Suppressed: java.lang.UnsatisfiedLinkError: /tmp/libnetty_tcnative_linux_x86_642188735639722784112.so: libcrypt.so.1: cannot open shared object file: No such file or directory

libcrypt.so.1 is missing in Fedora 30. Some more googling provided more context (and knowledge to me): Fedora 30 ships with libcrypt version 2, while netty-tcnative is looking for libcrypt version 1. I used the command below to install libcrypt version 1 on my OS, and things were happy again!

$ dnf install libxcrypt-compat  # installs libxcrypt-compat-4.4.6-2.fc30.x86_64

Armeria can now find netty-tcnative, and there are no more TLSv1.3-related errors. Yay!
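If you want to check whether the legacy library tcnative needs is actually visible on your box, the dynamic linker cache is a quick place to look. A minimal sketch (distro paths and cache contents vary; this is a hint, not part of the fix):

```shell
# Look for the legacy soname in the dynamic linker cache.
if ldconfig -p 2>/dev/null | grep -q 'libcrypt\.so\.1'; then
  echo "libcrypt.so.1 present"
else
  echo "libcrypt.so.1 missing - install libxcrypt-compat"
fi
```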

While I was writing this page (and doing other things), Trustin created issue#1984 to disable TLSv1.3 when it is not supported by the SSL engine (in this case, the JDK SSLEngine).


How to – When you have an HSBC UK Account

 

Updated: 13th May 2018
Reference: HSBC UK Chat Agent

Pay your HSBC UK Credit Card bill

If you have an HSBC UK Current Account and would like to pay your HSBC Credit Card bill, you have to set up a new payee.

Steps below:

Choose the current account you wish to make the payment from, 
then select ‘Move money’. From there choose ‘New Payee’ followed 
by ‘Payee Lookup’.

In the 'Find a company' box you can then type in the first 
6 digits of your Credit Card number and select the option
which matches the card type you have from the drop down.

Once you've chosen your card type, you need to enter your 
full 16 digit Credit Card number into the 
'Company reference' field.

You should then enter the last four digits of the 
Credit Card number into your Secure Key when generating
the transaction code.

 

You can also set up a Direct Debit to auto-pay the minimum balance, a fixed amount, or the full statement balance.

Steps below:

From your Accounts page, click on your credit card then 
choose "Manage", then select "Credit Card Repayment Options". 
Here you can set up, amend or cancel a Direct Debit for 
the card.

 

Verified by Visa Activation Code

To register for Verified by Visa, you’ll need an activation code provided by your bank. For HSBC UK, the steps are below.

You can set-up or reset your Verified by Visa information 
by choosing the current account or credit card, then 
select ‘Manage’. Under ‘Card Services’ select 
‘Verified by VISA’. You will receive a 6 digit code, 
which you can use on the Verified by Visa page by 
pressing ‘Continue’ below this.

 

Netty 4.1.x :: Running Notes

Version: 4.1.6.Final


  • EventLoopGroup when initializing ServerBootstrap

EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)

bossGroup (passed as parentGroup) handles the I/O for the acceptor channel, i.e. the one which is bound to the port where your server is accepting new connections.

workerGroup (passed as childGroup) will be handling the I/O for the accepted connections.

The acceptor channel should be handled by one and only one thread, and it will be. If you pass an EventLoopGroup with several EventLoops as the parent, it will only use one of the EventLoops anyway, so a good practice is to reuse the same EventLoopGroup for both parent and child.


  • AbstractBootstrap#handler vs. ServerBootstrap#childHandler for ServerBootstrap

val b = new ServerBootstrap()
b.group(boss, wrkr)
 .channel(classOf[NioServerSocketChannel])
 .handler(new LoggingHandler(LogLevel.INFO)) <<<<<<<<<<
 .childHandler(new ChannelInitializer[SocketChannel]() {
   override def initChannel(ch: SocketChannel): Unit =
     ch.pipeline()
       .addLast(new LoggingHandler(LogLevel.INFO)) <<<<<<<<<<<
       .addLast(new StringDecoder())
       .addLast(new StringEncoder())
 })

 

handler registers a channel handler for the parent channel
childHandler registers a channel handler for child channels

In the case of LoggingHandler, the first one logs events that happen in the parent channel, which include port binding and accepting new connections. So it produces logs (simplified and commented) like those below:

// parent channel registered
INFO - [id: 0xb94a8e7c] REGISTERED
// parent channel binds to localhost:8009
INFO - [id: 0xb94a8e7c] BIND: 0.0.0.0/0.0.0.0:8009
// parent channel active
INFO - [id: 0xb94a8e7c, L:/0:0:0:0:0:0:0:0:8009] ACTIVE
// parent channel accepts new connection, child channel with id 0xe507ce8f created
INFO - [id: 0xb94a8e7c, L:/0:0:0:0:0:0:0:0:8009] RECEIVED: [id: 0xe507ce8f, L:/0:0:0:0:0:0:0:1:8009 - R:/0:0:0:0:0:0:0:1:54398]

Suppose the child channel reads the data in the request; the logger in your second LoggingHandler (the one added via childHandler) will produce something like:

// child channel registered
INFO - [id: 0x15fee362, L:/0:0:0:0:0:0:0:1:8009 - R:/0:0:0:0:0:0:0:1:55459] REGISTERED
// child channel active
INFO - [id: 0x15fee362, L:/0:0:0:0:0:0:0:1:8009 - R:/0:0:0:0:0:0:0:1:55459] ACTIVE
// child channel received 7 bytes of data, “hello\r\n”
INFO - [id: 0x15fee362, L:/0:0:0:0:0:0:0:1:8009 - R:/0:0:0:0:0:0:0:1:55459] RECEIVED: 7B
// logs the hex dump of the received data

         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 68 65 6c 6c 6f 0d 0a                            |hello..         |
+--------+-------------------------------------------------+----------------+



Docker 1.12.x – Commonly Used Commands

The idea is to list commonly used (and searched-for) commands for Docker 1.12.x. They should work for the latest Docker in general. Hopefully I will keep this page updated.

  1. Remove all stopped docker containers:
    docker ps -a | grep Exit | cut -d ' ' -f 1 | xargs docker rm
  2. Remove all untagged docker images:
    docker rmi -f $(docker images | grep "<none>" | awk '{print $3}')
  3. Setting proxy for Docker: See https://mybrainimage.wordpress.com/2017/10/05/docker-1-13-installation-on-centos-7rhel-7/
  4. Change from docker’s default /var/lib/docker:  See https://linuxconfig.org/how-to-move-docker-s-default-var-lib-docker-to-another-directory-on-ubuntu-debian-linux
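To make the column extraction in item 2 concrete, here is the same grep/awk step run on canned "docker images" output (the image ID is made up); it pulls out the third whitespace-separated column, which is the image ID:

```shell
# grep keeps only untagged (<none>) rows; awk prints column 3, the image ID.
printf '%s\n' \
  'REPOSITORY   TAG      IMAGE ID       CREATED      SIZE' \
  '<none>       <none>   1a2b3c4d5e6f   2 days ago   120MB' \
  | grep "<none>" | awk '{print $3}'
# prints: 1a2b3c4d5e6f
```

Newer Docker versions can also list the same IDs directly with docker images -f dangling=true -q.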

 

Docker 1.13 Installation on CentOS 7/RHEL 7

Over the last couple of days, I was trying to install docker-1.13.1 x86_64 in my lab VMs. I thought it would be straightforward, or at least that I could google the error messages I saw and solve them pretty quickly. Boy, was I wrong!! Maybe I was typing in the wrong error messages, or somehow internet search engines were determined not to show me the obvious results first.

So here it goes: the steps required to get Docker 1.13 up and running in CentOS 7 / RHEL 7 systemd environments.

  1. Install docker. I installed it via the yum command. For detailed steps, see this page from docs.docker.com.
  2. As root user (or using sudo), create /etc/systemd/system/docker.service with the following content:
    [Unit]
    Description=Example Service Script description goes here
    After=network.target
    
    [Service]
    Type=simple
    ExecStart=/usr/local/sbin/docker.sh
    TimeoutStartSec=0
    
    [Install]
    WantedBy=multi-user.target
  3. As root user, create /etc/systemd/system/docker.service.d/overlay.conf with the following content (if required, create the parent directories). The Environment variables are not mandatory, but I have shown the option just in case someone wants to use it.
    [Service]
    
    ExecStart=
    
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
    
    Environment="no_proxy=.mycompany.com"
    
    Environment="http_proxy=http://proxy.com:8080"
    
    Environment="https_proxy=http://proxy.com:8080"
  4. As root user (or using sudo), execute the below command:
    service docker start
  5. Verify the docker service has started successfully using the below command. If the output shows ‘Active: active (running)’, then we’re good.
    [root@:~]$ service docker status
    
    Redirecting to /bin/systemctl status docker.service
    
    ● docker.service - Example Service Script description goes here
    
     Loaded: loaded (/etc/systemd/system/docker.service; disabled; vendor preset: disabled)
    
     Drop-In: /etc/systemd/system/docker.service.d
    
     └─overlay.conf
    
     Active: active (running) since Thu 2017-10-05 13:29:59 MST; 5s ago
    
     Main PID: 13108 (dockerd-latest)

If you’d like Docker to start at boot, then issue the below command using sudo or as root user

systemctl enable docker

 

Common error messages observed post-installation

[root@:~]$ docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

[root@:~]$ service docker restart
Redirecting to /bin/systemctl restart docker.service
Failed to restart docker.service: Unit not found.


[root@:~]$ systemctl enable docker
Failed to execute operation: No such file or directory


Ubuntu and Proxy

As always, use the below information with caution. Don’t break any law/policy.

 

As a developer working within an enterprise,

I would like to configure proxy setting for my Ubuntu VM,

So that I can spend maximum time in developing software and not fight the MITM proxy setup

 

Set proxy variables for general work

If you’re a superuser (root), then go ahead and create a new file with the proxy settings as below. If not, add the below content to your .bashrc.

$ cat /etc/profile.d/set-proxy.sh 
export proxy=http://your.company.proxy:port
export no_proxy="localhost,127.0.0.1,yourcompany.domain.com"
export http_proxy=$proxy
export https_proxy=$proxy
export HTTP_PROXY=$proxy
export HTTPS_PROXY=$proxy
export NO_PROXY=$no_proxy
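A quick sanity check that the variables are actually exported in your shell (the proxy host and port below are placeholders):

```shell
# Export the settings (normally done by /etc/profile.d/set-proxy.sh),
# then count the exported *_proxy variables.
export proxy=http://your.company.proxy:3128
export http_proxy=$proxy https_proxy=$proxy
env | grep -c '_proxy='   # at least 2: http_proxy and https_proxy
```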

 

Set proxy for apt to work

To do the below, you have to be a superuser. Create a new file with the below content.

$ cat /etc/apt/apt.conf.d/95proxies 
Acquire::http::proxy "http://your.company.proxy:port";
Acquire::http::Pipeline-Depth 0;
Acquire::http::No-Cache true;
Acquire::BrokenProxy true;

 

Bonus: CNTLM

Use cntlm to avoid writing your username and password everywhere. Some tools/libraries do not work well with username/password-based proxy URLs. Also, you’ll have to use appropriate URL-safe escape characters if your password contains @, $, etc. (See this page from cyberciti.biz for more details.)
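For the escaping point above, a minimal sketch that percent-encodes the troublesome characters before they go into a proxy URL (the sample password is made up):

```shell
# Percent-encode @, $ and : before embedding a password in a proxy URL.
pass='p@ss$w:rd'
printf '%s' "$pass" | sed -e 's/@/%40/g' -e 's/\$/%24/g' -e 's/:/%3A/g'
# prints: p%40ss%24w%3Ard
```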

Once you have cntlm executable, you can start in foreground using the below command

$ cntlm -g -v -u userid@domain -I -r "User-Agent: curl/7.44.0" -l 3232 your.company.proxy:port

Once started, cntlm will ask for your password (if you haven’t configured the ini file).

3232 is the port on which cntlm will listen for HTTP requests. The above command also shows the possibility of setting a pre-defined HTTP header that will be sent to your company proxy. This is pretty useful (and potentially dangerous) when the MITM proxy might not allow certain applications to access the internet.

 

Eclipse Che – Custom Service for firewall-cmd

Eclipse Che – pronounced ‘chay’ – is an exciting step towards a cloud developer workspace, i.e. an IDE. You can read more about this on their website.

This post talks about the firewalld (firewall-cmd) changes needed to make the eclipse/che:5.4.1 Docker image work on my CentOS 7.1 host machine. From their website:

Che is a platform that launches workspaces using Docker on different networks. Your browser or desktop IDE then connects to these workspaces. This makes Che a Platform as a Service (PaaS) running on a distributed network. There are essential connections we establish:

  1. Browser –> Che Server
  2. Che Server –> Docker Daemon
  3. Che Server –> Workspace
  4. Workspace –> Che Server
  5. Browser –> Workspace

 

Running command for the first time

docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock \
 -v /opt/eclipse/che:/data eclipse/che start

I got the following error:

 mem (1.5 GiB): [OK]
 disk (100 MB): [OK]
 port 8080 (http): [AVAILABLE]
 conn (browser => ws): [NOT OK]
 conn (server => ws): [NOT OK]

Upon reading their documentation, realised eclipse/che talks to workspace Che agents on ports ranging from 32768 to 65535.

So, what to do next? Well, it’s quite simple – make changes to CentOS firewalld to allow communication on required port number(s). If you’d like to read more about firewall-cmd, see this excellent article on digitalocean.com.

Time to make some changes to the firewalld rules. But instead of opening one port at a time (or worse, the complete range) – can we do it in an elegant fashion?

Yes, we can. Create a custom service and modify rules for that service. The advantages of doing this:

  • Easy to maintain: we can add/remove ports as required, and activate/shut down the service.
  • Easy to read: the firewall rules carry more descriptive information on why a certain rule was modified.

Steps I had followed

  • Log in to your machine as root (or as a sudo-capable user)
  • Copy services/ssh.xml as services/docker-eclipse-che.xml
cp /usr/lib/firewalld/services/ssh.xml /etc/firewalld/services/docker-eclipse-che.xml
  • Edit docker-eclipse-che.xml per your needs (following is what I have)

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>docker-eclipse-che</short>
  <description>Eclipse Che is a next-generation cloud IDE and workspace server that can run anywhere Docker runs. For more information, see https://www.eclipse.org/che/docs/setup/getting-started/</description>
  <port protocol="tcp" port="32768-65535"/>
</service>

  • Reload firewalld
firewall-cmd --reload
  • Verify firewalld is able to recognise docker-eclipse-che as one of the services
firewall-cmd --get-services
  • Add the docker-eclipse-che service to your required zone; basically, open up the mentioned ports for communication. (My ethernet is on the public zone; make the required changes based on your zone.)
firewall-cmd --zone=public --add-service=docker-eclipse-che
  • Test eclipse/che (using eclipse/che info --network) or with the previous docker run command. The connection to the Che workspace should now work.

Things should start working!
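Before reloading firewalld, it can be handy to confirm the service file is well-formed XML, since a typo there makes the reload fail. A sketch on a local copy (the port range is the 32768–65535 agent range mentioned above; python3 is assumed to be available):

```shell
# Write a local copy of the service XML and check that it parses.
cat <<'EOF' > /tmp/docker-eclipse-che.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>docker-eclipse-che</short>
  <description>Eclipse Che workspace agent ports.</description>
  <port protocol="tcp" port="32768-65535"/>
</service>
EOF
python3 -c 'import xml.dom.minidom; xml.dom.minidom.parse("/tmp/docker-eclipse-che.xml"); print("well-formed")'
```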

Docker Change Port Mapping for an Existing Container

Have you ever been in a situation where you forgot to “expose” a port for your container, or you’d like to change the port mapping for an existing container? I know I have been!!

When you perform a quick Google search, the most common answers say it is not possible and suggest recreating the container.

Among those answers, a saint would have mentioned that it is possible, but you’d have to do some extra work. So here I am, telling you – yep, it’s possible. I have tried it and it works. For the steps, see the linked answer written by “holdfenytolvaj”.

Here, I’ll explain what needs to be changed in order for you to modify the port mapping. In my case, I would like to expose an additional port – 8888 – from my Docker container.

Step 1: Using “docker inspect”, get details about the current port mapping. The mapping is seen under “NetworkSettings”, and under “PortBindings” in “HostConfig”.

"Ports": {
 "80/tcp": [ 
{
 "HostIp": "0.0.0.0",
 "HostPort": "80"
 }
]
 },

 

The above snippet (from NetworkSettings.Ports) declares: expose port 80 from my docker container to port 80 (on every network device) on my docker host machine.

NOTE: Stop the container and docker engine before editing the below files.

Step 2:  Edit the config.v2.json file as shown below

(a) Update entry for “ExposedPorts”

(b) Update entry for “Ports”

$ vi /var/lib/docker/containers//config.v2.json
...
{
  "Config": {
    ....
    "ExposedPorts": {
      "80/tcp": {},
      "8888/tcp": {}
    },
    ....
  },
  "NetworkSettings": {
    ....
    "Ports": {
      "80/tcp": [
        {
          "HostIp": "",
          "HostPort": "80"
        }
      ],
      "8888/tcp": [
        {
          "HostIp": "",
          "HostPort": "8888"
        }
      ]
    },
    ....
  }
}

In the above snippet, I have included one more port – 8888 – to be exposed as *:8888 on my host machine.

Step 3:  Edit the hostconfig.json file as shown below

(a) Update entry for “PortBindings”

$ vi /var/lib/docker/containers//hostconfig.json
{
  ....
  "PortBindings": {
    "80/tcp": [
      {
        "HostIp": "",
        "HostPort": "80"
      }
    ],
    "8888/tcp": [
      {
        "HostIp": "",
        "HostPort": "8888"
      }
    ]
  },
  .....
}
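A malformed file here can prevent the container (or even the engine) from starting, so it is worth checking that the edited JSON still parses before restarting anything. A minimal sketch on a stand-in file (the sample path and content are made up; python3 is assumed to be available):

```shell
# Write a sample of the edited structure and confirm it is valid JSON.
cat <<'EOF' > /tmp/hostconfig-sample.json
{
  "PortBindings": {
    "80/tcp":   [ { "HostIp": "", "HostPort": "80" } ],
    "8888/tcp": [ { "HostIp": "", "HostPort": "8888" } ]
  }
}
EOF
python3 -m json.tool /tmp/hostconfig-sample.json > /dev/null && echo "valid JSON"
```

Run the same json.tool check against the real config.v2.json and hostconfig.json after editing them.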

Save the file. Re-start your docker engine (docker service via systemctl). Verify docker engine has started successfully, without any errors.

Start your container.

When you execute “docker ps” command, the PORTS column should show the updated port mapping details.

 

Learning Apache Spark – Part 1

As part of my self-learning exercise, in December 2016 I enrolled in Simplilearn’s “Big Data Hadoop and Spark Developers” course. In this series, I will try to capture my notes and practice projects (if any).

 

What is RDD?

Spark’s documentation defines it as

Spark revolves around the concept of a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. There are two ways to create RDDs:

  • parallelizing an existing collection in your driver program, or
  • referencing a dataset in an external storage system, such as a shared filesystem, HDFS, HBase, or any data source offering a Hadoop InputFormat.

Create an RDD from a file in HDFS

I would like to create an RDD from a file present in HDFS. Using that RDD, I would like to perform some operations like word count, line count.

Step 1: Store the file in HDFS, if not done already.

$ hdfs dfs -put aa_wordcount.txt spark_input/aa_wordcount.txt

Step 2: From spark-shell, which is a Scala REPL, create a new RDD from a text file. As mentioned above, RDD can be created from HDFS.

Note: As part of initialization, Spark shell mentions that

  • Spark context available as sc.
  • SQL context available as sqlContext.

val rdd = sc.textFile("spark_input/aa_wordcount.txt")

 

 

To view the content of the file, call collect() on the rdd variable


rdd.collect()

res1: Array[String] = Array(a aa aaa, aaa aa a, aaa aa a aaa aa a aaa)

So what happened? What did collect() do? To explain this, I’ll have to explain “transformations” and “actions”.

RDDs support two types of operations: transformations and actions. All transformations in Apache Spark are lazy. By lazy I mean they don’t do anything at that moment in time – they don’t evaluate, they don’t perform any computation.

The actual computation is performed when an action is called. In the above example, collect() is an action. When an action runs, it executes all the transformations and then shows the result.

Take for example, the below snippet.

  • First, we create an RDD by calling textFile() – a transformation. The file has not been loaded yet.
  • Then we call the collect() action. This action loads the file, parses it by splitting on the newline character, and displays the content of the file as an array of Strings.
  • Now call the map() transformation function. This function receives a single argument – a single line from the file. Here we return the length of the line. Note that every transformation operation results in the creation of a new RDD.
  • On the last RDD, we then call the reduce() action. reduce() is an aggregator action, i.e. it receives two inputs and returns one.

val rdd = sc.textFile("spark_input/aa_wordcount.txt")

rdd: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[3] at textFile
rdd.collect()

res1: Array[String] = Array(a aa aaa, aaa aa a, aaa aa a aaa aa a aaa)

val lineLength = rdd.map(s => s.length())

lineLength: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[4] at map 

val totalLength = lineLength.reduce((a, b) => a + b)

totalLength: Int = 37

Some other examples

Example 1: Count the number of words in a file

val wordCountRDD = rdd.map(s => s.split("\\s").length)

val totalWordCount = wordCountRDD.reduce((a,b) => a + b)

Example 2: Display the first word of each line

val firstWordOfEachLineRDD = rdd.map(s => s.split("\\s")(0))

firstWordOfEachLineRDD.collect()

Example 3: Display the last word of each line

//TODO: Use Regular Expression

val lastWordOfEachLineRDD = rdd.map(s => { val tokens = s.split("\\s"); tokens(tokens.length - 1) })

lastWordOfEachLineRDD.collect()

Example 4: Remove blank line

val rdd2 = sc.textFile("spark_input/aa_wordcount_withblankline.txt")

rdd2.collect()

res5: Array[String] = Array(a aa aaa, "", aaa aa a, "", aaa aa a aaa aa a aaa, "", b, c, dddddddddddd dddd, "", z)

val removeBlankLine = rdd2.filter(s => s.length() > 0)

removeBlankLine.collect()

res6: Array[String] = Array(a aa aaa, aaa aa a, aaa aa a aaa aa a aaa, b, c, dddddddddddd dddd, z)


Linux find with exec and xargs

Linux’s find command is a handy one. It can not only find something; it can find something very specific and, when it does, perform some action on the result.

For instance, recently I had to calculate MD5 hash values for all the files (in this case, jar files) present within a directory.

find -maxdepth 5 -type f -name "*.jar" -exec md5sum "{}" + > /var/tmp/md5sum_lib.check

The above command searches the current directory for files whose names match the pattern “*.jar”. For each batch of matches, it executes the md5sum program, passing the file names as arguments. If there are any sub-directories, it repeats until there are no more sub-directories or the nesting level reaches 5 from the starting directory.

All this was good. But then, I wanted the hash values for the files in sorted order – something that Linux’s sort executable does well. Sorting would allow me to use the diff command to show differences between two runs. Enter xargs. I modified the above command as:

find -maxdepth 5 -type f -name "*.jar" -print0 | sort -z | xargs -r0 md5sum > /var/tmp/md5sum_lib.check

The -r option makes sure xargs invokes the md5sum executable only when it actually receives arguments, i.e. nothing is run when find produces no output. The -0 option pairs with find’s -print0 and sort’s -z, so file names are passed as NUL-delimited strings and spaces in names are handled safely.
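To see the sorted pipeline end to end, here is a self-contained rerun on throwaway files (the directory and file names are made up for illustration):

```shell
# Rerun of the find | sort | xargs pipeline on a throwaway directory.
tmp=$(mktemp -d)
printf 'alpha\n' > "$tmp/a.jar"
printf 'beta\n'  > "$tmp/b.jar"
cd "$tmp"
find . -maxdepth 5 -type f -name "*.jar" -print0 | sort -z | xargs -r0 md5sum > md5sum_lib.check
cat md5sum_lib.check   # two lines, ./a.jar sorted before ./b.jar
```

Running this twice into two different output files and diffing them is exactly the comparison workflow described above.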