User Datagram Protocol (UDP) is a transport-layer protocol. UDP is part of the Internet protocol suite, referred to as the UDP/IP suite.

UDP uses a simple connectionless communication model with a minimum of protocol mechanisms.

UDP provides checksums for data integrity, and port numbers for addressing different functions at the source and destination of the datagram.

It has no handshaking dialogues, and thus exposes the user’s program to any unreliability of the underlying network; there is no guarantee of delivery, ordering, or duplicate protection.
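This connectionless, no-handshake model can be sketched with Python's standard `socket` module. The sketch below sends a datagram over the loopback interface; note there is no `connect()` and no acknowledgment — the sender just fires the packet at an address (delivery is reliable here only because it is loopback):

```python
import socket

# Receiver: bind a UDP socket; the OS picks a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(5)
port = receiver.getsockname()[1]

# Sender: no handshake, no connection setup -- just sendto().
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))  # no ACK is expected back

data, addr = receiver.recvfrom(1024)
print(data)  # b'hello' (on loopback; over a real network delivery is not guaranteed)

sender.close()
receiver.close()
```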

Where can we use the UDP protocol?

After sending a packet, UDP does not make the sender wait for an acknowledgment; it can immediately send the next packet.

Use cases are

When one server in a distributed system sends a time token / time-to-live token to another server. If a packet is lost in transit, the sender should not wait for an acknowledgment and resend the old packet — the time has moved on by then, so sending a fresh packet is more useful.

A UDP packet is small (the header is only 8 bytes), so UDP works well even on very low-bandwidth links.

Difference between TCP and UDP?

The UDP header is a simple, fixed 8-byte header, while the TCP header may vary from 20 bytes to 60 bytes.
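The fixed 8-byte UDP header (per RFC 768: source port, destination port, length, and checksum, each 16 bits in network byte order) can be packed with Python's `struct` module. The port numbers and payload here are made up for illustration:

```python
import struct

payload = b"ping"
src_port, dst_port = 12345, 53
length = 8 + len(payload)   # header (8 bytes) + data
checksum = 0                # 0 means "no checksum" for UDP over IPv4

# "!" = network (big-endian) byte order, "H" = unsigned 16-bit field.
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
print(len(header))  # 8 -- the fixed UDP header size
```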

In TCP, after a packet is sent, the sender waits for an acknowledgment. If the acknowledgment does not arrive, the sender has to resend the packet.

TCP is more reliable and must be used when we cannot tolerate packet loss. UDP should be used when packets need to be sent fast even on limited bandwidth, and the occasional lost packet is acceptable.

My Udemy Courses

Click the images/names below to buy the courses and get discounts.

ElasticSearch Logstash Kibana

Elasticsearch Logstash Kibana and Beats tutorial with DSL Queries, Aggregator & Tokenizer.

For Basic Users

1) Author Introduction and Course Description

2) Introduction of complete ELK stack and different types of beats with internals of Elasticsearch and Lucene Indexing.

3) Installation of Elasticsearch and Kibana on Windows Server

For Advanced Users

4) Data ingestion from MySQL, Oracle, Apache, REST API, & Nginx logs using Logstash & Filebeat, with live examples.

5) Kibana for data visualization and dashboard (creation,monitoring & sharing) + Metricbeat + WinlogBeat (Installation, Data Ingestion and Dashboard Management)

6) DSL, Aggregation and Tokenizer Queries


Learn Kafka – Kafka Connect – Kafka Stream with hands-on examples and case studies.

The learning environment is set up with a three-node cluster to give a production-level environment for learning and growing. The course connects all the Kafka dots — from the CLI to Kafka Connect and stream processing — and is designed for students from beginner level to expert.

Data Structure

This course is specially designed for Java learners who want to program their favorite algorithms in Java and learn the new trick of lambda expressions on top of Java collections.

The best part of this course is the tricky interview problems, which are first explained and then programmed in Java.

Each algorithm is deeply explained and analyzed for its best use.

AWS SQS

Amazon Simple Queue Service (SQS) is a fully managed message queuing service.

SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work.

Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. Get started with SQS in minutes using the AWS console, Command Line Interface or SDK of your choice, and three simple commands.

SQS offers two types of message queues.

  • Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery.
  • SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
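At-least-once delivery (standard queues) means the same message can occasionally be delivered twice, so consumers are usually written to be idempotent. A minimal sketch of that pattern, tracking already-seen message IDs (the IDs and bodies here are hypothetical, not real SQS message IDs):

```python
def make_consumer():
    """Build an idempotent message handler that ignores redeliveries."""
    seen = set()
    processed = []

    def handle(msg_id, body):
        if msg_id in seen:          # duplicate redelivery -> skip it
            return False
        seen.add(msg_id)
        processed.append(body)      # real work would happen here
        return True

    return handle, processed

handle, processed = make_consumer()
handle("m-1", "order placed")
handle("m-2", "order shipped")
handle("m-1", "order placed")       # duplicate, silently ignored
print(processed)  # ['order placed', 'order shipped']
```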

Q: How is Amazon SQS different from Amazon SNS?

Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.

Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components.
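SQS itself requires an AWS account, but the decoupling it provides — a producer and a consumer that share only a queue and a polling loop — can be sketched with Python's standard library:

```python
import queue
import threading

q = queue.Queue()  # stands in for the SQS queue

def producer():
    for i in range(3):
        q.put(f"message-{i}")   # "send" messages
    q.put(None)                 # sentinel: no more messages

received = []

def consumer():
    while True:
        msg = q.get(timeout=5)  # "poll" the queue for the next message
        if msg is None:
            break
        received.append(msg)

# Producer and consumer know nothing about each other, only the queue.
t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
print(received)  # ['message-0', 'message-1', 'message-2']
```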

Q: How is Amazon SQS different from Amazon MQ?

If you’re using messaging with existing applications and want to move your messaging to the cloud quickly and easily, consider Amazon MQ. It supports industry-standard APIs and protocols so you can switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications.

If you are building brand new applications in the cloud, we recommend you consider Amazon SQS and Amazon SNS. Amazon SQS and SNS are lightweight, fully managed message queue and topic services that scale almost infinitely and provide simple, easy-to-use APIs. 

NASA and BMW are among the users of AWS SQS and SNS 🙂

Q: How is Amazon SQS different from Amazon Kinesis Streams?

Amazon Kinesis Streams allows real-time processing of streaming big data and the ability to read and replay records to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications that read from the same Amazon Kinesis stream (for example, to perform counting, aggregation, and filtering).
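The "all records for a given partition key go to the same processor" behavior comes from deterministic hashing: the same key always hashes to the same shard. A simplified sketch (Kinesis actually maps an MD5 of the partition key into per-shard hash-key ranges; the fixed shard count here is an assumption for illustration):

```python
import hashlib

NUM_SHARDS = 4  # hypothetical shard count

def shard_for(partition_key: str) -> int:
    """Deterministically map a partition key to a shard index."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# The same key always lands on the same shard, so one record
# processor sees every record for that key, in order:
print(shard_for("user-42") == shard_for("user-42"))  # True
```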

Q) What is the cost model of AWS SQS?

Ans) Check it yourself

Know Your Elasticsearch!

Q) What is Elasticsearch?

Ans) Elasticsearch is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial (geo-location), structured, and unstructured.

Elasticsearch is built on Apache Lucene and was first released in 2010 by Elasticsearch N.V. (now known as Elastic).

We can use REST APIs, in addition to ample tools, for data ingestion, enrichment, storage, analysis, and visualization. Thanks to the REST CRUD API, it is easy to integrate with languages/platforms like Java, Python, or Spring Boot.

Q) Two case studies for using Elasticsearch, or areas where we can use it?

Ans) a) ELK Stack

Any application generates logs. We can collect them with Logstash, which stores these logs in Elasticsearch. After insertion, we can view the data in Kibana, where we can write queries to analyze it in tabular or graphical form.
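For instance, a typical query one might write in Kibana's Dev Tools console is a simple `match` query (the index and field names here are made up for illustration):

```
GET /app-logs/_search
{
  "query": {
    "match": { "message": "connection timeout" }
  }
}
```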

b) Searching text in Java Application

I have a blog; I can insert its content into Elasticsearch using Java (REST CRUD API).

On top of that, I can use Spring Data Elasticsearch (JPA-style repositories) to search text in Elasticsearch and bring related results back as the result of a GET API.

Q) Is Elasticsearch a NoSQL database?

Ans) Yes

Q) Is Elasticsearch built upon the Lucene engine?

Ans) Yes

Q) What are the terminologies of Elasticsearch?

Ans) Field, document, index, and cluster.

Q) Map the above Elasticsearch terminologies to RDBMS.

Ans)

Elasticsearch    RDBMS
-------------    --------
Cluster          Database
Index            Table
Document         Row
Field            Column

● Cluster: A cluster is a collection of one or more nodes that together hold the entire data. It provides federated indexing and search capabilities across all nodes and is identified by a unique name (by default, ‘elasticsearch’).

● Node: A node is a single server that is part of a cluster; it stores data and participates in the cluster’s indexing and search capabilities.

● Index: An index is a collection of documents with similar characteristics and is identified by a name. This name is used to refer to the index while performing indexing, search, update, and delete operations against the documents in it.

● Type: A type is a logical category/partition of an index whose semantics is completely up to the user. It is defined for documents that have a set of common fields; you can define more than one type in your index.

● Document: A document is the basic unit of information that can be indexed. It is expressed in JSON, which is a ubiquitous internet data interchange format.

Documents also contain reserved fields that constitute the document metadata such as:

  1. _index – the index where the document resides
  2. _type – the type that the document represents
  3. _id – the unique identifier for the document

An example of a document:

{
   "_index": "your-index-name",
   "_type": "your-index-type",
   "_id": 3,
   "age": 32,
   "name": "arun"
}

● Shards: Elasticsearch provides the ability to subdivide the index into multiple pieces called shards. Each shard is in itself a fully functional and independent “index” that can be hosted on any node within the cluster.

● Replicas: Elasticsearch allows you to make one or more copies of your index’s shards, which are called replica shards, or replicas.

Q) Why is Elasticsearch faster at searching than file search / RDBMS search?

Ans) It all depends on how these systems store data, rather than how they retrieve it.

Let me explain: suppose I have 1000 blogs, and three of them contain the word ShRaam.

An RDBMS/file system will go page by page, search the entire content of each page, and then return the three that contain the matching term.

Elasticsearch, on the other hand, uses an inverted index, i.e. it stores the words of the pages as keys pointing to those pages:

ShRaam–> Page x, y and z

So when you search for the keyword ShRaam, it simply returns the three pages where it is present, instead of scanning page content at query time.
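The ShRaam example above can be sketched as a toy inverted index in a few lines of Python (the page names and contents are made up for illustration; real Elasticsearch also does tokenization, stemming, scoring, etc.):

```python
from collections import defaultdict

# Hypothetical "blogs" -- three of them contain the word ShRaam.
pages = {
    "page-x": "ShRaam writes about UDP",
    "page-y": "notes on Elasticsearch by ShRaam",
    "page-z": "ShRaam again",
    "page-w": "nothing relevant here",
}

# Build the inverted index at write time: word -> set of pages.
index = defaultdict(set)
for page_id, text in pages.items():
    for word in text.lower().split():
        index[word].add(page_id)

# At query time, lookup is a single dictionary access,
# not a scan over every page's content:
print(sorted(index["shraam"]))  # ['page-x', 'page-y', 'page-z']
```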

Q) Name three companies using Elasticsearch?

Ans) Netflix