AWS KMS Encrypt/Decrypt Example in Java

What is KMS?

KMS is AWS's 'Key Management Service'. It helps you create and manage cryptographic keys, and control the use of those keys across a wide range of AWS services and in your own applications.

Your application can be written in C#, Go, Java, Node.js, PHP, Python, or Ruby; SDKs for all of these are available on the AWS site.

When to Use KMS?

KMS is used to store and manage encryption/decryption data keys. You then use those data keys to encrypt and decrypt your data, for example together with the AWS Encryption SDK.

Code Time

build.gradle

// AWS SDK v2
implementation platform('software.amazon.awssdk:bom:2.17.87')
implementation 'software.amazon.awssdk:kms'

// AWS SDK v1 (used by the example below)
implementation platform('com.amazonaws:aws-java-sdk-bom:1.12.116')
implementation 'com.amazonaws:aws-java-sdk-kms'

Java Code

@Bean
public AWSKMS kmsClient() {

    String apiKey = "<access key from AWS console>";
    String apiSecret = "<secret key from AWS console>";

    AWSCredentials credentials = new BasicAWSCredentials(apiKey, apiSecret);
    AWSCredentialsProvider credentialsProvider = new AWSStaticCredentialsProvider(credentials);

    return AWSKMSClientBuilder.standard()
            .withCredentials(credentialsProvider)
            .withRegion(Regions.<aws region from your console>)
            .build();
}

private final AWSKMS kmsClient;

public String encrypt(String input) throws Exception {
    String kmsKey = "<KMS key ARN from AWS console>";
    ByteBuffer plaintext = ByteBuffer.wrap(input.getBytes(StandardCharsets.UTF_8));
    EncryptRequest req = new EncryptRequest().withKeyId(kmsKey).withPlaintext(plaintext);
    ByteBuffer ciphertext = kmsClient.encrypt(req).getCiphertextBlob();
    // Base64-encode the ciphertext so it can be stored or transmitted as text
    return Base64.getUrlEncoder().encodeToString(ciphertext.array());
}

public String decrypt(String input) throws Exception {
    String kmsKey = "<KMS key ARN from AWS console>";
    byte[] cipherBytes = Base64.getUrlDecoder().decode(input);
    ByteBuffer cipherBuffer = ByteBuffer.wrap(cipherBytes);
    DecryptRequest req = new DecryptRequest().withKeyId(kmsKey).withCiphertextBlob(cipherBuffer);
    DecryptResult resp = kmsClient.decrypt(req);
    return new String(resp.getPlaintext().array(), StandardCharsets.UTF_8);
}
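
The code above uses the AWS SDK v1 client. For reference, here is a minimal sketch of the same encrypt/decrypt flow with the AWS SDK v2 client (the software.amazon.awssdk:kms dependency above); the key ARN, region and credentials are placeholders you must supply yourself:

import java.util.Base64;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.DecryptRequest;
import software.amazon.awssdk.services.kms.model.EncryptRequest;

public class KmsV2Example {

    // Placeholders: supply your own key ARN, region and credentials
    private static final String KEY_ARN = "<KMS key ARN from AWS console>";

    private final KmsClient kms = KmsClient.builder()
            .region(Region.AP_SOUTH_1)
            .credentialsProvider(StaticCredentialsProvider.create(
                    AwsBasicCredentials.create("<access key>", "<secret key>")))
            .build();

    public String encrypt(String input) {
        EncryptRequest req = EncryptRequest.builder()
                .keyId(KEY_ARN)
                .plaintext(SdkBytes.fromUtf8String(input))
                .build();
        // Base64-encode the ciphertext so it can be stored or sent as text
        return Base64.getUrlEncoder()
                .encodeToString(kms.encrypt(req).ciphertextBlob().asByteArray());
    }

    public String decrypt(String input) {
        DecryptRequest req = DecryptRequest.builder()
                .keyId(KEY_ARN)
                .ciphertextBlob(SdkBytes.fromByteArray(Base64.getUrlDecoder().decode(input)))
                .build();
        return kms.decrypt(req).plaintext().asUtf8String();
    }
}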

Amazon Affiliate Marketing with a WordPress Website and SEO

Learn how to create a domain, host a website, and build an e-commerce store using WordPress. Explore SEO and rank your website to earn passive income by building an Amazon affiliate website with lots of lightweight plugins and the OceanWP theme, without any coding.

Learn domain selection, keyword selection, and creating do-follow backlinks to increase the rank and domain authority of your website.

By taking this course, not only will you be able to build a complete Amazon affiliate e-commerce website, but you'll also learn how to populate it with millions of top-rated Amazon products and rank high in Google. You'll learn to generate sales and earn a generous commission on each sale.

Let’s go through a quick overview of exactly what you will learn in this course:

  • Section 1:
    An introduction to the course, plus finding the best deal when purchasing a domain and hosting server.
  • Section 2:
    Introduction to the WordPress site builder and admin view. Installing an SSL certificate and the OceanWP theme for your site, then creating the Amazon store and your first post.
  • Section 3:
    Very important: what we should avoid doing.
  • Section 4:
    What to build for a good website, i.e. keyword selection and its importance. Adding the Rank Math SEO plugin and the Google site plugin for analytics and Google search results.
  • Section 5:
    What backlinks are, the difference between dofollow and nofollow links, and above all the techniques to get dofollow links.
  • Section 6:
    A site maintenance section, where we cover site updates, backups, and how to connect to our site's deployment server via SFTP.

Backlinks

A backlink is a link created when one website links to another. Backlinks are also called “inbound links” or “incoming links.” Backlinks are important for SEO: as their number increases, search engines gain confidence in the site's content, which in turn increases the site's Google rank.

Backlinks are of two types: dofollow and nofollow.

Dofollow

<a href="https://learningsubway.com/">Link Text</a>

Dofollow links are the type of backlink that we want. Just keep in mind that those coming from respected sites hold the most value. This kind of backlink can help improve your search engine rankings.

Nofollow

<a href="https://learningsubway.com" rel="nofollow">Link Text</a>

The nofollow tag tells search engines to ignore a link. Nofollow links don't pass any value from one site to another, so they typically aren't helpful in improving your search rank or visibility.

Techniques to create Backlinks

  • Update your Udemy profile with your website link.
  • Update your Twitter profile with your website link.
  • Create an account on the Amazon community and add your website link to your profile.
  • Create blogs on WordPress or Wix (free).
  • Create a Facebook page and add your posts.

SEO

What Is SEO?

SEO stands for search engine optimization. SEO is the practice of taking steps to help a website or piece of content rank higher in search engines such as Google and Bing.

What are the search statistics of different search engines?

Search Engine        % used in 2020
Google               72%
Bing                 12%
Baidu                11%
Yandex and others    remaining share

Regional Distribution of Google Search Traffic

Area      % of search traffic
USA       27%
UK        4.12%
Brazil    4.58%
Japan     4.12%
India     3.9%

% of clicks on Google search results by position

CTR – click-through rate

Page Position    CTR
1                28.5%
2                15.7%
3                11%
4                8.0%
5                7.2%
6                5.1%
7                4%
8                3.2%
9                2.8%
10               2.5%

Ways to Improve your Site’s Ranking

  • Create and publish relevant, authoritative content.
  • Update your content regularly.
  • Use alt tags.
  • Attract links from other websites, blogs, online forums, and comments.
  • Create a metadata-rich website (title, description, and keywords).

Read & Write data into AWS SQS using Java

Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS moves data between distributed application components and helps you decouple these components.
Before starting development, install the AWS SDK.
For basic theory, check my AWS SQS page.
In application.yml 
sqs:
  region: ap-south-1
  accessKeyId: arunsinghgujjar
  secretAccessKey: jainpurwalearunsingh/saharanpursepauchepuna
cloud:
  aws:
    end-point:
      uri: https://arun-learningsubway-1.amazonaws.com/9876974864/learningsubway_SQS.fifo
    queue:
      max-poll-time: 20
      max-messages: 10
      fetch-wait-on-error: 60
      enabled: true
      content: sqs
In Java we need to create two classes: one to send messages and another to read and delete messages from the queue.
To Send an SQS Message
// Holder for the sqs.* settings from application.yml
public static class SqsConfig {
    private String region;
    private String accessKeyId;
    private String secretAccessKey;
}

public class SendMessageByLearningsubway {

    @Value("${cloud.aws.queue.enabled}")
    private Boolean enabled;
    @Value("${cloud.aws.end-point.uri}")
    private String url;
    @Value("${cloud.aws.queue.content}")
    private String queueType;
    @Value("${cloud.aws.queue.max-poll-time}")
    private Integer maxPollTime;
    @Value("${cloud.aws.queue.max-messages}")
    private Integer maxMessages;
    @Value("${cloud.aws.queue.fetch-wait-on-error}")
    private Integer fetchWaitOnError;

    @Autowired
    public SqsClient sqsClient;

    public String sendMessage(MessageDistributionEvent messageDistributionEvent) {
        SendMessageResponse sendMessage = null;
        try {
            Map<String, MessageAttributeValue> attributes = new HashMap<>();

            // Build a message body of the form <channelId>_<messageId>_<recipient1>_<recipient2>...
            String recepList = "";
            for (Integer myInt : messageDistributionEvent.getRecipients()) {
                recepList = recepList + "_" + myInt;
            }
            SendMessageRequest sendMsgRequest = SendMessageRequest.builder()
                    .queueUrl(url)
                    .messageBody(messageDistributionEvent.getChannelId() + "_" + messageDistributionEvent.getMessageId() + recepList)
                    .messageGroupId("1")
                    .messageAttributes(attributes)
                    .build();
            sendMessage = sqsClient.sendMessage(sendMsgRequest);
        } catch (Exception ex) {
            log.info("failed to send message :" + ex);
        }
        // note: sendMessage will be null here if the send failed
        return sendMessage.sequenceNumber();
    }
}
To Read and Delete a Message
If a message is not deleted, it will remain in the queue and may be delivered again.
public class ReceiveMessageLearningsubway {

    @Value("${cloud.aws.queue.enabled}")
    private Boolean enabled;
    @Value("${cloud.aws.end-point.uri}")
    private String url;
    @Value("${cloud.aws.queue.content}")
    private String queueType;
    @Value("${cloud.aws.queue.max-poll-time}")
    private Integer maxPollTime;
    @Value("${cloud.aws.queue.max-messages}")
    private Integer maxMessages;
    @Value("${cloud.aws.queue.fetch-wait-on-error}")
    private Integer fetchWaitOnError;

    private boolean running = false;

    @Autowired
    public SqsClient sqsClient;

    @PostConstruct
    public void start() {
        if (enabled && queueType.equals("sqs")) {
            running = true;
            new Thread(this::startListener, "sqs-listener").start();
        }
    }

    @PreDestroy
    public void stop() {
        running = false;
    }

    private void startListener() {
        while (running) {
            try {
                ReceiveMessageRequest receiveMessageRequest = ReceiveMessageRequest.builder()
                        .queueUrl(url)
                        .waitTimeSeconds(maxPollTime)
                        .maxNumberOfMessages(maxMessages)
                        .messageAttributeNames("MessageLabel")
                        .build();
                List<Message> sqsMessages = sqsClient.receiveMessage(receiveMessageRequest).messages();
                for (Message message : sqsMessages) {
                    try {
                        // Message body has the form <channelId>_<messageId>_<recipient1>_<recipient2>...
                        String body = message.body();
                        String[] data = body.split("_");
                        List<Integer> listRecipient = new LinkedList<>();
                        for (int i = 2; i < data.length; i++) {
                            log.info("RecepId:" + data[i]);
                            listRecipient.add(Integer.parseInt(data[i]));
                        }
                        // data read from queue by learningsubway.com
                        System.out.println(Integer.parseInt(data[0]) + " " + data[1] + " " + listRecipient);
                        // delete the message so it is not delivered again
                        sqsClient.deleteMessage(DeleteMessageRequest.builder()
                                .queueUrl(url)
                                .receiptHandle(message.receiptHandle())
                                .build());
                    } catch (Exception e) {
                        log.error("Failed to process {}", message.messageId(), e);
                    }
                }
            } catch (Exception e) {
                log.warn("Error in fetching messages from SQS Queue. Will sleep and retry again.", e);
                try {
                    Thread.sleep(fetchWaitOnError * 1000);
                } catch (InterruptedException ie) {
                    log.error("Unable to sleep the sqs-listener", ie);
                }
            }
        }
    }

    public String sendMessage(QueueMessage message) {
        SendMessageResponse sendMessage = null;
        try {
            log.info("send message on SQS: " + message);
            Map<String, MessageAttributeValue> attributes = new HashMap<>();
            attributes.put("ContentBasedDeduplication", MessageAttributeValue.builder().dataType("String").stringValue("true").build());
            attributes.put("MessageLabel", MessageAttributeValue.builder()
                    .dataType("String")
                    .stringValue(message.getMsgId())
                    .build());
            SendMessageRequest sendMsgRequest = SendMessageRequest.builder()
                    .queueUrl(url)
                    .messageBody(message.getMsgId())
                    .messageGroupId("1")
                    .messageAttributes(attributes)
                    .build();
            sendMessage = sqsClient.sendMessage(sendMsgRequest);
            log.info("data sent to queue: " + sendMessage.sequenceNumber());
        } catch (Exception ex) {
            log.info("failed to send message :" + ex);
        }
        return sendMessage.sequenceNumber();
    }
}
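
The classes above autowire an SqsClient but don't show how it is built. A minimal sketch of a Spring bean for it, assuming the sqs.* properties from the application.yml above, could look like this (static credentials are used only to mirror that yml; in production the default credentials provider chain is usually preferable):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;

@Configuration
public class SqsClientConfig {

    @Bean
    public SqsClient sqsClient(@Value("${sqs.region}") String region,
                               @Value("${sqs.accessKeyId}") String accessKeyId,
                               @Value("${sqs.secretAccessKey}") String secretAccessKey) {
        // Builds the SDK v2 client used by the send/receive classes above
        return SqsClient.builder()
                .region(Region.of(region))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(accessKeyId, secretAccessKey)))
                .build();
    }
}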

FileBeat 401 Unauthorized Error with AWS Elasticsearch

Overview

Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to Elasticsearch or Logstash for indexing.

Here’s how Filebeat works: When you start Filebeat, it starts one or more inputs that look in the locations you’ve specified for log data. For each log that Filebeat locates, Filebeat starts a harvester. Each harvester reads a single log for new content and sends the new log data to libbeat, which aggregates the events and sends the aggregated data to the output that you’ve configured for Filebeat.

If you are getting the error below while importing data into AWS Elasticsearch directly from Filebeat, then this post is for you!

Exiting: 1 error: error loading index pattern: returned 401 to import file: . Response: {"statusCode":401,"error":"Unauthorized","message":"Authentication required"}

This issue occurs if you are connecting to AWS Elasticsearch with username/password security, as in:

setup.kibana:
  host: "https://arun-learningsubway-abxybalglzl3zmkmiq4.ap-south-1.es.amazonaws.com:443/_plugin/kibana/"

output.elasticsearch:
  protocol: https
  hosts: ["arun-learningsubway-workapps-abxybalglzl3zmkmiq4.ap-south-1.es.amazonaws.com:9200"]
  username: "myUsername"
  password: "myPassword"
  index: "nginx_index_by_arun"

Solution

In AWS, while configuring your Elasticsearch service, configure it for IP whitelisting instead of a master user.

or

Configure FileBeat -> Logstash -> Elasticsearch with the master username/password; that will also work.

FileBeat and Logstash to insert Data into AWS Elasticsearch

FileBeat to insert Data into Logstash, and Logstash to insert Data into Elasticsearch

*An important point here is that the latest Elasticsearch version supported on AWS is 7.10, so Logstash and FileBeat must also be on the same version.

If not, there is a possibility of version compatibility issues.

* If the latest available ES version is x and you are not on the cloud, still keep at least version (x-1) in production. It will keep you safe in production and away from product bugs to a large extent.

Click and download Filebeat 7.10 and Logstash 7.10

Configuration of FileBeat to insert nginx logs into Logstash

Open filebeat.yml in any editor of your choice from location

/etc/filebeat/ on Linux or

C:\Program Files\filebeat-7.10.0 on Windows

filebeat:
  inputs:
    - paths:
        - E:/nginx-1.20.1/logs/*.log
      input_type: log

filebeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml

output:
  logstash:
    hosts: ["localhost:5044"]

Logstash Configuration

input {
  beats {
    port => 5044
    ssl => false
  }
}

filter {
  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}" ]
    overwrite => [ "message" ]
  }
  mutate {
    convert => ["response", "integer"]
    convert => ["bytes", "integer"]
    convert => ["responsetime", "float"]
  }
  geoip {
    source => "clientip"
    target => "geoip"
    add_tag => [ "nginx-geoip" ]
  }
  date {
    match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    remove_field => [ "timestamp" ]
  }
  useragent {
    source => "agent"
  }
}

output {
  elasticsearch {
    hosts => ["https://arun-learningsubway-ybalglooophuhyjmik3zmkmiq4.ap-south-1.es.amazonaws.com:443"]
    index => "arun_nginx"
    document_type => "%{[@metadata][type]}"
    user => "myusername"
    password => "mypassword"
    manage_template => false
    template_overwrite => false
    ilm_enabled => false
  }
}

Commands to Run on Windows

To run Nginx
cd D:\nginx
start nginx

To kill the Nginx process
taskkill /IM "nginx.exe" /F

To run Filebeat

To enable the nginx module
.\filebeat.exe modules enable nginx

C:\Program Files\filebeat-7.10.0> .\filebeat.exe -e

To run Logstash

C:\logstash> .\bin\logstash.bat -f .\config\logstash.conf

UDP

User Datagram Protocol (UDP) is a transport layer protocol. UDP is part of the Internet Protocol suite, referred to as the UDP/IP suite.

UDP uses a simple connectionless communication model with a minimum of protocol mechanisms.

UDP provides checksums for data integrity, and port numbers for addressing different functions at the source and destination of the datagram.

It has no handshaking dialogues, and thus exposes the user’s program to any unreliability of the underlying network; there is no guarantee of delivery, ordering, or duplicate protection.

Where can we use the UDP protocol?

After sending a packet, the UDP protocol does not make the sender wait for an acknowledgment before sending another packet.

Use cases are:

When one server in a distributed system sends a time token / liveness token to another server. In this case, if a packet is lost in between, the server should not wait for an acknowledgment and resend it, since the time has already moved on; sending a fresh packet is more helpful.

A UDP packet is small, so it can be used even on very low bandwidth.

Difference between TCP and UDP?

The UDP header is a simple, fixed 8 bytes, while the TCP header may vary from 20 bytes to 60 bytes.

In the TCP protocol, after a packet is sent, the sender waits for an acknowledgment. If the acknowledgment does not arrive, the sender has to resend the packet.

TCP is more reliable and must be used if we can't tolerate packet loss. UDP should be used when packets need to be sent fast, even on low bandwidth, and when losing some packets is not an issue.
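
To make the fire-and-forget behaviour concrete, here is a minimal sketch (the host, port, and payload are made up for illustration) that sends a single datagram and simply moves on, without waiting for any acknowledgment:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpFireAndForget {
    public static void main(String[] args) throws Exception {
        // Hypothetical payload, e.g. a liveness/time token
        byte[] payload = "time-token-42".getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length, InetAddress.getByName("localhost"), 9876);
            // No acknowledgment is expected; if the packet is lost, it is simply lost
            socket.send(packet);
        }
    }
}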

My Udemy Courses

Click the images/names below to buy the courses and get discounts.

ElasticSearch Logstash Kibana

Elasticsearch, Logstash, Kibana, and Beats tutorial with DSL queries, aggregators & tokenizers.

For Basic Users

1) Author Introduction and Course Description

2) Introduction of complete ELK stack and different types of beats with internals of Elasticsearch and Lucene Indexing.

3) Installation of Elastic search and Kibana on Windows server

For Advanced Users

4) Data Ingestion from Mysql, Oracle, Apache, Rest API, & Nginx logs using Logstash & Filebeat with live examples.

5) Kibana for data visualization and dashboard (creation,monitoring & sharing) + Metricbeat + WinlogBeat (Installation, Data Ingestion and Dashboard Management)

6) DSL, Aggregation and Tokenizer Queries

Kafka

Learn Kafka – Kafka Connect – Kafka Streams with hands-on examples and case studies.

The learning environment is set up with a three-node cluster to give a production-level environment for learning and growing, and to connect all the Kafka dots, from the CLI to Kafka Connect and stream processing, under one course, keeping in mind students from beginner level to expert.

Data Structure

This course is specially designed for Java learners who want to program their favorite algorithms in Java and need to learn the new trick of lambda expressions on top of Java collections.

The best part of this course is the weird interview problems which are first explained and then programmed in Java.

Each algorithm is deeply explained and analyzed for its best use.

AWS SQS

Amazon Simple Queue Service (SQS) is a fully managed message queuing service.

SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work.

Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. Get started with SQS in minutes using the AWS console, Command Line Interface or SDK of your choice, and three simple commands.

SQS offers two types of message queues; a short creation sketch follows the list below.

  • Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery.
  • SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
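
As an illustration of the two queue types, here is a minimal AWS SDK v2 sketch (the queue names are made up) that creates one standard and one FIFO queue:

import java.util.Map;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.CreateQueueRequest;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

public class CreateQueues {
    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.builder().region(Region.AP_SOUTH_1).build()) {

            // Standard queue: maximum throughput, best-effort ordering, at-least-once delivery
            String standardUrl = sqs.createQueue(CreateQueueRequest.builder()
                    .queueName("learningsubway-standard-demo")
                    .build()).queueUrl();

            // FIFO queue: name must end with ".fifo"; exactly-once processing, strict ordering
            String fifoUrl = sqs.createQueue(CreateQueueRequest.builder()
                    .queueName("learningsubway-demo.fifo")
                    .attributes(Map.of(
                            QueueAttributeName.FIFO_QUEUE, "true",
                            QueueAttributeName.CONTENT_BASED_DEDUPLICATION, "true"))
                    .build()).queueUrl();

            System.out.println(standardUrl + "\n" + fifoUrl);
        }
    }
}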

Q: How is Amazon SQS different from Amazon SNS?

Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.

Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components.

Q: How is Amazon SQS different from Amazon MQ?

If you're using messaging with existing applications and want to move your messaging to the cloud quickly and easily, we recommend you consider Amazon MQ. It supports industry-standard APIs and protocols, so you can switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications.

If you are building brand new applications in the cloud, we recommend you consider Amazon SQS and Amazon SNS. Amazon SQS and SNS are lightweight, fully managed message queue and topic services that scale almost infinitely and provide simple, easy-to-use APIs. 

NASA and BMW are among the users of AWS SQS and SNS 🙂

Q: How is Amazon SQS different from Amazon Kinesis Streams?

Amazon Kinesis Streams allows real-time processing of streaming big data and the ability to read and replay records to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications that read from the same Amazon Kinesis stream (for example, to perform counting, aggregation, and filtering).

Q) What is the cost model of AWS SQS ?

Ans) Check it yourself

Know Your Elasticsearch!

Q) What is Elasticsearch?

Ans) Elasticsearch is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial (geo-location), structured, and unstructured.

Elasticsearch is built on Apache Lucene and was first released in 2010 by Elasticsearch N.V. (now known as Elastic).

We can use its REST APIs, along with ample tools, for data ingestion, enrichment, storage, analysis, and visualization. Thanks to the REST API's CRUD features, it is easy to integrate with languages/platforms like Java, Python, or Spring Boot.

Q) Give two case studies for using Elasticsearch, or areas where we can use it?

Ans) a) ELK Stack

Any application generates logs. We can monitor the application with the use of Logstash, which stores these logs into Elasticsearch. After insertion we can see the data in Kibana, where we can write queries to analyze it in tabular or graphical form.

b) Searching text in a Java application

I have a blog; I can insert its content into Elasticsearch using Java (REST CRUD APIs).

On top of that, I can use Spring Data Elasticsearch (JPA-style repositories) to search text in Elasticsearch and bring the related results back as the result of a GET API.

Q) Is Elasticsearch a NoSQL database?

Ans) Yes

Q) Is Elasticsearch built upon the Lucene engine?

Ans) Yes

Q) Terminologies of Elasticsearch?

Ans) Field, document, index, and cluster.

Q) Map the above Elasticsearch terminologies to RDBMS?

Ans)
Elasticsearch    RDBMS
cluster          database
index            table
document         row
field            column

● Cluster: A cluster is a collection of one or more nodes that together hold the entire data. It provides federated indexing and search capabilities across all nodes and is identified by a unique name (by default it is 'elasticsearch').

● Node: A node is a single server which is part of a cluster; it stores data and participates in the cluster's indexing and search capabilities.

● Index: An index is a collection of documents with similar characteristics and is identified by a name. This name is used to refer to the index while performing indexing, search, update, and delete operations against the documents in it.

● Type: A type is a logical category/partition of an index whose semantics is completely up to the user. It is defined for documents that have a set of common fields. You can define more than one type in your index.

● Document: A document is a basic unit of information which can be indexed. It is represented in JSON, which is a global internet data interchange format.

Documents also contain reserved fields that constitute the document metadata such as:

  1. _index – the index where the document resides
  2. _type – the type that the document represents
  3. _id – the unique identifier for the document

An example of a document:

{
   "_id": 3,
   "_type": ["your index type"],
   "_index": ["your index name"],
   "_source": {
      "age": 32,
      "name": ["arun"],
      "year": 1989
   }
}

● Shards: Elasticsearch provides the ability to subdivide the index into multiple pieces called shards. Each shard is in itself a fully-functional and independent "index" that can be hosted on any node within the cluster.

● Replicas: Elasticsearch allows you to make one or more copies of your index’s shards which are called replica shards or replica.

Q) Why is Elasticsearch faster at searching than file search/RDBMS search?

Ans) It all depends on how these systems store data, rather than on how they retrieve it.

Let me explain. If I have 1000 blogs and three of them contain the word ShRaam,

then an RDBMS/file system will go through each blog/page, search its entire content, and then return the three that contain the matching term.

Elasticsearch, on the other hand, uses an inverted index: it stores the words of those pages as keys pointing back to the pages.

ShRaam –> pages x, y and z

So when you search for the keyword ShRaam, it simply returns those three pages where it is present, instead of searching through the page content at query time.
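
A toy sketch of this idea (not Elasticsearch's actual implementation, just an illustration with made-up documents) could look like this:

import java.util.*;

public class TinyInvertedIndex {
    public static void main(String[] args) {
        Map<Integer, String> docs = Map.of(
                1, "arun writes about kafka",
                2, "ShRaam appears in this blog",
                3, "another blog mentioning ShRaam");

        // Build the inverted index: word -> set of document ids containing it
        Map<String, Set<Integer>> index = new HashMap<>();
        docs.forEach((id, text) -> {
            for (String word : text.toLowerCase().split("\\s+")) {
                index.computeIfAbsent(word, w -> new HashSet<>()).add(id);
            }
        });

        // A search is now a single map lookup instead of scanning every document
        System.out.println(index.getOrDefault("shraam", Set.of())); // e.g. [2, 3]
    }
}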

Q) Name three companies using Elasticsearch?

Ans) Netflix

Walmart

Ebay

SpringDataElasticsearch Queries

Add only spring-data-elasticsearch dependency
--------------------Model----------
@Document(indexName = "arun_order")   // index name is illustrative; pick your own
public class ArunOrder {
    @Id
    private Integer id;
    private Integer userId;
    private String description;
    private Boolean hidden;
    @Field(type = FieldType.Date)
    private Long createdDate;
    @Field(type = FieldType.Date)
    private Long updatedDate;
    // getters and setters
}
------------------Repository--------------
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;
import org.springframework.stereotype.Repository;
import java.util.List;

@Repository
public interface MyOrderElastricSearchRepository extends ElasticsearchRepository<ArunOrder, Integer> {

    // Search a text anywhere in the order description
    List<ArunOrder> findByDescriptionContaining(String subject);

    // Search a text in the description for orders of a particular user only
    List<ArunOrder> findByDescriptionContainingAndUserId(String subject, Integer userId);

    // Search a text in the description for visible orders of a particular user only
    List<ArunOrder> findByDescriptionContainingAndUserIdAndHiddenFalse(String subject, Integer userId);

    // Search a text in the description for visible orders of a particular user only, sorted by created date
    List<ArunOrder> findByDescriptionContainingAndUserIdAndHiddenFalseOrderByCreatedDateDesc(String subject, Integer userId);
}
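
A hedged usage sketch (the service name is illustrative) showing how these derived queries might be called from a Spring service:

import java.util.List;
import org.springframework.stereotype.Service;

@Service
public class ArunOrderSearchService {

    private final MyOrderElastricSearchRepository repository;

    public ArunOrderSearchService(MyOrderElastricSearchRepository repository) {
        this.repository = repository;
    }

    // Find all visible orders of a user whose description mentions the given text,
    // newest first (uses the derived query declared in the repository above)
    public List<ArunOrder> visibleOrdersMentioning(String text, Integer userId) {
        return repository.findByDescriptionContainingAndUserIdAndHiddenFalseOrderByCreatedDateDesc(text, userId);
    }
}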

application.yml and jars required for Spring Boot 2.4+ with Elasticsearch 7.1+

build.gradle

ext {
    springBootVersion = '2.5.0'
}
dependencies {
    implementation('org.springframework.boot:spring-boot-starter-web') {
        exclude module: "spring-boot-starter-tomcat"
    }
    implementation('org.springframework.data:spring-data-elasticsearch')
}

application.yml

elasticsearch:
  rest:
    uris: http://localhost:9200

or, with username and password:

elasticsearch:
  rest:
    uris: https://MyHost:9200
    username: arun
    password: mypassword

Elasticsearch Java Config

package com.arun.config;

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate;
import org.springframework.data.elasticsearch.repository.config.EnableElasticsearchRepositories;

@Configuration
@EnableElasticsearchRepositories(basePackages = "com.arun.elasticsearch.repository")
@ComponentScan(basePackages = { "com.arun.elasticsearch.service" })
public class ElasticSearchConfig {

    private static final String HOST = "aws_host";    // localhost for a local cluster
    private static final int PORT = 443;              // 9200 for a local cluster
    private static final String PROTOCOL = "https";   // http for a local cluster

    @Bean
    public RestHighLevelClient client() {
        final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("UserName", "Password"));

        RestClientBuilder builder = RestClient.builder(new HttpHost(HOST, PORT, PROTOCOL))
                .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
                    @Override
                    public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                        return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
                    }
                });

        // the original snippet was missing this return statement
        return new RestHighLevelClient(builder);
    }
}
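
If you also want the ElasticsearchOperations/ElasticsearchRestTemplate abstraction, a small additional bean along these lines could be added inside ElasticSearchConfig; this is a sketch assuming Spring Data Elasticsearch 4.x:

// Inside ElasticSearchConfig, next to the client() bean above
@Bean
public ElasticsearchOperations elasticsearchTemplate() {
    // Wraps the RestHighLevelClient bean so repositories and templates share one client
    return new ElasticsearchRestTemplate(client());
}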

FFMPEG

Calling FFMPEG from Java

FFmpeg is the leading multimedia framework, able to decode, encode, transcode, mux, demux, stream, filter, and play pretty much anything that humans and machines have created. It supports the most obscure ancient formats up to the cutting edge, no matter if they were designed by some standards committee, the community, or a corporation. It is also highly portable: FFmpeg compiles, runs, and passes its testing infrastructure FATE across Linux, Mac OS X, Microsoft Windows, the BSDs, Solaris, etc. under a wide variety of build environments, machine architectures, and configurations.

Way 1:
package ffmpeg;

import java.io.IOException;

public class myFFMPEG {
    // ffmpeg path:      C:/Users/arsingh/Desktop/try/ffmpeg
    // input img path:   C:/Users/arsingh/Desktop/AnupData/sampleImages
    // input audio path: C:/Users/arsingh/Desktop/AnupData/AAJ.mp3
    public static void main(String[] args) throws IOException {
        System.out.println("Creation Started ....k");
        // ffmpeg -r 1/5 -i img%03d.png -c:v libx264 -vf "fps=25,format=yuv420p" out.mp4
        // ffmpeg -loop 1 -i img.jpg -i audio.wav -c:v libx264 -c:a aac -strict experimental -b:a 192k -shortest out.mp4
        // Each option and its value must be a separate array element
        String cmd[] = {"C://Program Files (x86)//ffmpeg//ffmpeg", "-r", "1/5", "-i",
                "C://Users//arsingh//Desktop//AnupData//sampleImages//img%03d.jpg", "-i",
                "C://Users//arsingh//Desktop//AnupData//AAJ.mp3",
                "C://Users//arsingh//Desktop//AnupData//sampleImages//out.mp4"};

        convertImg_to_vid(cmd);
        System.out.println("Video Created");
    } // end of main

    static void convertImg_to_vid(String cmd[]) {
        try {
            // Run ffmpeg directly with the argument array and wait for it to finish
            Process process = Runtime.getRuntime().exec(cmd);
            process.waitFor();
        } catch (IOException | InterruptedException e) {
            e.printStackTrace();
        }
    }
}

Way 2:
package ffmpeg;

import java.io.IOException;

public class ffMpegStich {

    /**
     * @param args
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {

        System.out.println("Creation Started ....k");
        String ffMpegPath = "C://Program Files (x86)//ffmpeg//ffmpeg";
        String imagePath = "C://Users//arsingh//Desktop//AnupData//sampleImages//img%03d.jpg";
        String audioPath = "C://Users//arsingh//Desktop//AnupData//AAJ.mp3";
        String videoPath = "C://Users//arsingh//Desktop//AnupData//sampleImages//out.mp4";
        // Each option and its value must be a separate array element
        String cmd[] = {ffMpegPath, "-r", "1/5", "-i", imagePath, "-i", audioPath, videoPath};

        Runtime.getRuntime().exec(cmd);
        System.out.println("Video Created");
    }
}
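
A third option, shown here as a hedged sketch (it reuses the same illustrative ffmpeg and file paths as above), is ProcessBuilder, which makes it easier to stream ffmpeg's console output and check the exit code:

package ffmpeg;

import java.io.IOException;

public class FfmpegProcessBuilderSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "C://Program Files (x86)//ffmpeg//ffmpeg",
                "-r", "1/5",
                "-i", "C://Users//arsingh//Desktop//AnupData//sampleImages//img%03d.jpg",
                "-i", "C://Users//arsingh//Desktop//AnupData//AAJ.mp3",
                "C://Users//arsingh//Desktop//AnupData//sampleImages//out.mp4");

        pb.inheritIO();                 // stream ffmpeg's output to this console
        Process process = pb.start();
        int exitCode = process.waitFor();
        System.out.println(exitCode == 0 ? "Video Created" : "ffmpeg failed with exit code " + exitCode);
    }
}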

UDP

Implementing UDP Protocol for Broadcasting

UDP (User Datagram Protocol) uses a simple connectionless communication model with a minimum of protocol mechanisms. It has no handshaking dialogues, and thus exposes the user's program to any unreliability of the underlying network; there is no promise of delivery, ordering, or duplicate packet protection. UDP provides checksums for data integrity, and port numbers for addressing different functions at the source of packet transmission and the destination of the datagram. If error-correction facilities are needed at the network interface level, an application may use Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP), which are designed for this purpose.

Program 1.
import java.net.*;
import java.io.*;
import java.util.*;

public class BroadcastServer {
    public static final int PORT = 1200;

    public static void main(String args[]) throws Exception {
        MulticastSocket socket;
        DatagramPacket packet;
        InetAddress address;
        address = InetAddress.getByName("228.5.6.7");
        socket = new MulticastSocket();
        // join a multicast group and send the group salutations
        socket.joinGroup(address);
        byte[] data = null;
        for (;;) {
            Thread.sleep(1000);
            System.out.println("Sending ");
            String str = (new Date()).toString();
            data = str.getBytes();
            packet = new DatagramPacket(data, str.length(), address, PORT);
            // Sends the packet
            socket.send(packet);
        } // for
    } // main
} // class BroadcastServer

import java.net.*;
import java.io.*;
import java.util.*;

public class BroadcastClient {
    public static final int PORT = 1200;

    public static void main(String[] args) throws Exception {
        MulticastSocket socket;
        DatagramPacket packet;
        InetAddress address = InetAddress.getByName("228.5.6.7");
        socket = new MulticastSocket(PORT);

        // join a multicast group and send the group salutations
        socket.joinGroup(address);
        byte[] data = new byte[256];
        packet = new DatagramPacket(data, data.length);

        for (;;) {
            // receive the packets
            socket.receive(packet);
            String str = new String(packet.getData(), 0, packet.getLength());
            System.out.println(" Time signal received from " + packet.getAddress() + " Time is : " + str);
        } // for
    } // main
} // class BroadcastClient


Program 2.
// 1) first file: QuoteServerThread.java
import java.io.*;
import java.net.*;
import java.util.*;

public class QuoteServerThread extends Thread {

    protected DatagramSocket socket = null;
    protected BufferedReader in = null;
    protected boolean moreQuotes = true;

    public QuoteServerThread() throws IOException {
        this("QuoteServerThread");
    }

    public QuoteServerThread(String name) throws IOException {
        super(name);
        socket = new DatagramSocket(4445);

        try {
            in = new BufferedReader(new FileReader("us.java"));
        } catch (FileNotFoundException e) {
            System.err.println("Could not open quote file. Serving time instead.");
        }
    }

    public void run() {
        while (moreQuotes) {
            try {
                byte[] buf = new byte[256];

                // receive request
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);

                // figure out response
                String dString = null;
                if (in == null)
                    dString = new Date().toString();
                else
                    dString = getNextQuote();
                buf = dString.getBytes();

                // send the response to the client at "address" and "port"
                InetAddress address = packet.getAddress();
                int port = packet.getPort();
                packet = new DatagramPacket(buf, buf.length, address, port);
                socket.send(packet);
            } catch (IOException e) {
                e.printStackTrace();
                moreQuotes = false;
            }
        }
        socket.close();
    }

    protected String getNextQuote() {
        String returnValue = null;
        try {
            if ((returnValue = in.readLine()) == null) {
                in.close();
                moreQuotes = false;
                returnValue = "No more quotes. Goodbye.";
            }
        } catch (IOException e) {
            returnValue = "IOException occurred in server.";
        }
        return returnValue;
    }
}
// 2) second file: MulticastServerThread.java
import java.io.*;
import java.net.*;
import java.util.*;

public class MulticastServerThread extends QuoteServerThread {

    private long FIVE_SECONDS = 5000;

    public MulticastServerThread() throws IOException {
        super("MulticastServerThread");
    }

    public void run() {
        while (moreQuotes) {
            try {
                byte[] buf = new byte[256];

                // construct quote
                String dString = null;
                if (in == null)
                    dString = new Date().toString();
                else
                    dString = getNextQuote();
                buf = dString.getBytes();

                // send it to the multicast group
                InetAddress group = InetAddress.getByName("230.0.0.1");
                DatagramPacket packet = new DatagramPacket(buf, buf.length, group, 4446);
                socket.send(packet);

                // sleep for a while
                try {
                    sleep((long) (Math.random() * FIVE_SECONDS));
                } catch (InterruptedException e) { }
            } catch (IOException e) {
                e.printStackTrace();
                moreQuotes = false;
            }
        }
        socket.close();
    }
}

// 3) third file: MulticastServer.java

public class MulticastServer {
    public static void main(String[] args) throws java.io.IOException {
        new MulticastServerThread().start();
    }
}


// 4) fourth file: MulticastClient.java

import java.io.*;
import java.net.*;
import java.util.*;

public class MulticastClient {

    public static void main(String[] args) throws IOException {

        MulticastSocket socket = new MulticastSocket(4446);
        InetAddress address = InetAddress.getByName("230.0.0.1");
        socket.joinGroup(address);

        DatagramPacket packet;

        // get a few quotes
        for (int i = 0; i < 5; i++) {
            byte[] buf = new byte[256];
            packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);

            String received = new String(packet.getData(), 0, packet.getLength());
            System.out.println("Quote of the Moment: " + received);
        }

        socket.leaveGroup(address);
        socket.close();
    }
}
/* On one command prompt:
> javac QuoteServerThread.java
> javac MulticastServerThread.java
> javac MulticastServer.java
> java MulticastServer
==> make sure to change the input file name in the QuoteServerThread code
On a second command prompt:
> javac MulticastClient.java
> java MulticastClient
*/

UDP provides an unreliable packet delivery system built on top of the IP protocol. As with IP, each packet is an individual and is handled separately. Because of this, the amount of data that can be sent in a UDP packet is limited to the amount that can be contained in a single IP packet. Thus, a UDP packet can contain at most 65507 bytes (this is the 65535-byte IP packet size minus the minimum IP header of 20 bytes and minus the 8-byte UDP header).

UDP packets can arrive out of order or not at all. No packet has any knowledge of the preceding or following packet. The recipient does not acknowledge packets, so the sender does not know that the transmission was successful. UDP has no provisions for flow control: packets can be received faster than they can be used. We call this type of communication connectionless because the packets have no relationship to each other and because there is no state maintained.


The destination IP address and port number are encapsulated in each UDP packet. These two numbers together uniquely identify the recipient and are used by the underlying operating system to deliver the packet to a specific process (application).

One way to think of UDP is by analogy to communication via a letter. You write the letter (this is the data you are sending); put the letter inside an envelope (the UDP packet); address the envelope (using an IP address and a port number); put your return address on the envelope (your local IP address and port number); and then you send the letter.

Like a real letter, you have no way of knowing whether a UDP packet was received. If you send a second letter one day after the first, the second one may be received before the first. Or, the second one may never be received.

So why use UDP if it is unreliable? Two reasons: speed and overhead.


UDP packets have almost no overhead: you simply send them and then forget about them. And they are fast, since there is no acknowledgement required for each packet. Keep in mind the degree of unreliability we are talking about. For all practical purposes, an Ethernet breaks down if more than about 2 percent of all packets are lost. So, when we say unreliable, the worst-case loss is very small.

UDP is appropriate for the many network services that do not require guaranteed delivery. An example of this is a network time service. Consider a time daemon that issues a UDP packet every second so computers on the LAN can synchronize their clocks. If a packet is lost, it's no big deal: the next one will be along in another second and will contain all the necessary information to accomplish the task.

Another common use of UDP is in networked, multi-user games, where a player's position is sent periodically. Again, if one position update is lost, the next one will contain all the required information.

A broad class of applications is built on top of UDP using streaming protocols. With streaming protocols, receiving data in real time is far more important than guaranteeing delivery. Examples of real-time streaming protocols are RealAudio and RealVideo, which respectively deliver real-time streaming audio and video over the Internet. The reason a streaming protocol is desired in these cases is that if an audio or video packet is lost, it is much better for the client to see this as noise or "drop-out" in the sound or picture rather than having a long pause while the client software stops the playback and requests the missing data from the server. That would result in a very choppy, bursty playback which most people find unacceptable, and which would place a heavy demand on the server.

Creating UDP Servers

To create a server with UDP, do the following:

1. Create a DatagramSocket attached to a port.

int port = 1234;
DatagramSocket socket = new DatagramSocket(port);

2. Allocate space to hold the incoming packet, and create an instance of DatagramPacket to hold the incoming data.

byte[] buffer = new byte[1024];
DatagramPacket packet = new DatagramPacket(buffer, buffer.length);

3. Block until a packet is received, then extract the information you need from the packet.

// Block on receive()
socket.receive(packet);

// Find out where the packet came from
// so we can reply to the same host/port
InetAddress remoteHost = packet.getAddress();
int remotePort = packet.getPort();

// Extract the packet data
byte[] data = packet.getData();

The server can now process the data it has received from the client, and issue an appropriate reply in response to the client's request.

Creating UDP Clients

Writing code for a UDP client is similar to what we did for a server. Again, we need a DatagramSocket and a DatagramPacket. The only real difference is that we must specify the destination address with each packet, so the form of the DatagramPacket constructor used here specifies the destination host and port number. Then, of course, we initially send packets instead of receiving them.

1. First allocate space to hold the data we are sending and create an instance of DatagramPacket to hold the data.

byte[] buffer = new byte[1024];
int port = 1234;
InetAddress host = InetAddress.getByName("magelang.com");
DatagramPacket packet = new DatagramPacket(buffer, buffer.length, host, port);

2. Create a DatagramSocket and send the packet using this socket.

DatagramSocket socket = new DatagramSocket();
socket.send(packet);

The DatagramSocket constructor that takes no arguments will allocate a free local port to use. You can find out what local port number has been allocated for your socket, along with other information about your socket if needed.

// Find out where we are sending from
InetAddress localHostname = socket.getLocalAddress();
int localPort = socket.getLocalPort();

The client then waits for a reply from the server. Many protocols require the server to reply to the host and port number that the client used, so the client can now invoke socket.receive() to wait for information from the server.



TCP guarantees the delivery of packets and preserves their order at the destination. Sometimes these features are not required, and since they do not come without performance costs, it can be preferable to use a lighter transport protocol. This kind of service is provided by the UDP protocol, which conveys datagram packets.

Datagram packets are used to implement a connectionless packet delivery service supported by the UDP protocol. Each message is transferred from the source machine to the destination based on information contained within that packet. That means each packet needs to carry a destination address, each packet might be routed differently, and packets might arrive in any order. Packet delivery is not guaranteed.

The format of a datagram packet is:

| Msg | length | Host | serverPort |


Java supports datagram communication through the following classes:
• DatagramPacket
• DatagramSocket

The DatagramPacket class contains several constructors that can be used for creating packet objects. One of them is:

DatagramPacket(byte[] buf, int length, InetAddress address, int port);

This constructor is used for creating a datagram packet for sending packets of the given length to the specified port number on the specified host. The message to be transmitted is indicated in the first argument. The key methods of the DatagramPacket class are:

byte[] getData()
- Returns the data buffer.
int getLength()
- Returns the length of the data to be sent or the length of the data received.
void setData(byte[] buf)
- Sets the data buffer for this packet.
void setLength(int length)
- Sets the length for this packet.

The DatagramSocket class supports various methods that can be used for transmitting or receiving a datagram over the network. The two key methods are:

void send(DatagramPacket p)
- Sends a datagram packet from this socket.
void receive(DatagramPacket p)
- Receives a datagram packet from this socket.

A simple UDP server program that waits for clients' requests, accepts the message (datagram), and sends back the same message is given below.


// UDPServer.java: A simple UDP server program.
import java.net.*;
import java.io.*;

public class UDPServer {

    public static void main(String args[]) {
        DatagramSocket aSocket = null;
        if (args.length < 1) {
            System.out.println("Usage: java UDPServer <Port Number>");
            System.exit(1);
        }

        try {
            int socket_no = Integer.valueOf(args[0]).intValue();
            aSocket = new DatagramSocket(socket_no);
            byte[] buffer = new byte[1000];

            while (true) {
                DatagramPacket request = new DatagramPacket(buffer, buffer.length);
                aSocket.receive(request);
                // echo the received data back to the sender
                DatagramPacket reply = new DatagramPacket(request.getData(),
                        request.getLength(), request.getAddress(),
                        request.getPort());
                aSocket.send(reply);
            }
        } catch (SocketException e) {
            System.out.println("Socket: " + e.getMessage());
        } catch (IOException e) {
            System.out.println("IO: " + e.getMessage());
        } finally {
            if (aSocket != null)
                aSocket.close();
        }
    }
}





import java.net.*;
import java.io.*;

public class UDPClient {

    public static void main(String args[]) {
        // args give message contents and server hostname
        DatagramSocket aSocket = null;
        if (args.length < 3) {
            System.out.println(
                    "Usage: java UDPClient <message> <Host name> <Port number>");
            System.exit(1);
        }

        try {
            aSocket = new DatagramSocket();
            byte[] m = args[0].getBytes();
            InetAddress aHost = InetAddress.getByName(args[1]);
            int serverPort = Integer.valueOf(args[2]).intValue();

            DatagramPacket request =
                    new DatagramPacket(m, args[0].length(), aHost, serverPort);
            aSocket.send(request);
            byte[] buffer = new byte[1000];
            DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
            aSocket.receive(reply);
            System.out.println("Reply: " + new String(reply.getData()));
        } catch (SocketException e) {
            System.out.println("Socket: " + e.getMessage());
        } catch (IOException e) {
            System.out.println("IO: " + e.getMessage());
        } finally {
            if (aSocket != null)
                aSocket.close();
        }
    }
}

Program 2) Practical usage of the UDP protocol

// Server Program

import java.net.*;
import java.io.*;

public class EchoServer {
    // Initialize port number and packet size
    static final int serverPort = 1026;
    static final int packetSize = 1024;

    public static void main(String args[]) throws SocketException {
        DatagramPacket packet;
        DatagramSocket socket;
        byte[] data; // For data to be sent in packets
        int clientPort;
        InetAddress address;
        String str;
        socket = new DatagramSocket(serverPort);

        for (;;) {
            data = new byte[packetSize];

            // Create a packet to receive the message
            packet = new DatagramPacket(data, packetSize);
            System.out.println("Waiting to receive the packets");
            try {
                // wait indefinitely for arrival of the packet
                socket.receive(packet);
            } catch (IOException ie) {
                System.out.println(" Could not Receive :" + ie.getMessage());
                System.exit(0);
            }
            // get data about the client in order to echo data back
            address = packet.getAddress();
            clientPort = packet.getPort();
            // print the string that was received on the server's console
            str = new String(data, 0, packet.getLength());
            System.out.println("Message :" + str.trim());
            System.out.println("From :" + address);
            // echo data back to the client
            // Create a packet to send to the client
            packet = new DatagramPacket(data, packetSize, address, clientPort);
            try {
                // sends packet
                socket.send(packet);
            } catch (IOException ex) {
                System.out.println("Could not Send : " + ex.getMessage());
                System.exit(0);
            }
        } // for loop
    } // main
} // class EchoServer


// Client Program

import java.net.*;
import java.io.*;

public class EchoClient {
    static final int serverPort = 1026;
    static final int packetSize = 1024;

    public static void main(String args[]) throws UnknownHostException, SocketException {
        DatagramSocket socket;   // How we send packets
        DatagramPacket packet;   // What we send it in
        InetAddress address;     // Where to send
        String messageSend;      // Message to be sent
        String messageReturn;    // What we get back from the server
        byte[] data;

        // Checks the arguments passed to the java interpreter
        // Make sure command line parameters are correct
        if (args.length != 2) {
            System.out.println("Usage Error : Java EchoClient <Server name> <Message>");
            System.exit(0);
        }
        // Gets the IP address of the server
        address = InetAddress.getByName(args[0]);
        socket = new DatagramSocket();
        messageSend = new String(args[1]);
        // remember datagrams hold bytes
        data = messageSend.getBytes();
        packet = new DatagramPacket(data, data.length, address, serverPort);
        System.out.println(" Trying to Send the packet ");
        try {
            // sends the packet
            socket.send(packet);
        } catch (IOException ie) {
            System.out.println("Could not Send :" + ie.getMessage());
            System.exit(0);
        }
        // the packet is reinitialized to use it for receiving
        data = new byte[packetSize];
        packet = new DatagramPacket(data, data.length);
        try {
            // Receives the packet from the server
            socket.receive(packet);
        } catch (IOException iee) {
            System.out.println("Could not receive : " + iee.getMessage());
            System.exit(0);
        }
        // display the message received
        messageReturn = new String(packet.getData(), 0, packet.getLength());
        System.out.println("Message Returned : " + messageReturn.trim());
    } // main
} // class EchoClient

Websocket

The web has been largely built around the so-called request/response paradigm of HTTP. A client loads up a web page and then nothing happens until the user clicks onto the next page. Around 2005, AJAX started to make the web feel more dynamic. Still, all HTTP communication was steered by the client, which required user interaction or periodic polling to load new data from the server.

Technologies that enable the server to send data to the client the very moment it knows new data is available have been around for quite some time. They go by names such as "Push" or "Comet". One of the most common hacks to create the illusion of a server-initiated connection is called long polling. With long polling, the client opens an HTTP connection to the server, which keeps it open until it has a response to send. Whenever the server actually has new data, it sends the response (other techniques involve Flash, XHR multipart requests and so-called htmlfiles). Long polling and the other techniques work quite well; you use them every day in applications such as GMail chat. However, all of these work-arounds share one problem: they carry the overhead of HTTP, which doesn't make them well suited for low-latency applications. Think multiplayer first-person shooter games in the browser or any other online game with a realtime component.

WebSocket is a protocol providing full-duplex communications channels over a single TCP connection. The WebSocket protocol was standardized by the IETF as RFC 6455 in 2011, and the WebSocket API in Web IDL is being standardized by the W3C.

In addition, the communications are done over TCP port number 80, which is of benefit for those environments which block non-web Internet connections using a firewall. WebSocket protocol is currently supported in several browsers including Google Chrome, Internet Explorer, Firefox, Safari and Opera. WebSocket also requires web applications on the server to support it.

WebSocket Attributes:

Following are the attributes of a WebSocket object, assuming we created the Socket object as mentioned above:

Attribute                Description
Socket.readyState        The read-only attribute readyState represents the state of the connection. It can have the following values:
                         0 – the connection has not yet been established; 1 – the connection is established and communication is possible; 2 – the connection is going through the closing handshake; 3 – the connection has been closed or could not be opened.
Socket.bufferedAmount    The read-only attribute bufferedAmount represents the number of bytes of UTF-8 text that have been queued using the send() method.

WebSocket Events:

Following are the events associated with a WebSocket object, assuming we created the Socket object as mentioned above:

Event      Event Handler       Description
open       Socket.onopen       This event occurs when the socket connection is established.
message    Socket.onmessage    This event occurs when the client receives data from the server.
error      Socket.onerror      This event occurs when there is any error in communication.
close      Socket.onclose      This event occurs when the connection is closed.

WebSocket Methods:

Following are the methods associated with a WebSocket object, assuming we created the Socket object as mentioned above:

Method            Description
Socket.send()     The send(data) method transmits data using the connection.
Socket.close()    The close() method is used to terminate any existing connection.
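
For the server-side Java code later in this post, the same ideas map onto annotations in the Java WebSocket API (JSR 356). A minimal client sketch, assuming a JSR 356 implementation such as Tyrus on the classpath and the /wsocket endpoint defined below, might look like this:

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class SimpleWsClient {

    @OnOpen                                     // roughly Socket.onopen
    public void onOpen(Session session) throws Exception {
        session.getBasicRemote().sendText("hello from Java");   // roughly Socket.send()
    }

    @OnMessage                                  // roughly Socket.onmessage
    public void onMessage(String message) {
        System.out.println("MSG from Server: " + message);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        try (Session session = container.connectToServer(
                SimpleWsClient.class,
                URI.create("ws://localhost:8080/WebSocketSample/wsocket"))) {
            Thread.sleep(5000);   // keep the connection open briefly to receive replies
        }                         // closing the Session is roughly Socket.close()
    }
}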

Example:
This is an advanced example of taking a snapshot on the client side and sending the data to the server using a WebSocket.
To capture the client snapshot we use html2canvas (HTML5):
http://html2canvas.hertzen.com/
https://github.com/niklasvh/html2canvas
// please download the html2canvas source file from the links above for the screen-capture functionality and put it in the WebContent folder with your HTML files.

Create a dynamic web project in Eclipse.
1) wsocket.html
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- Arun HTML File -->
<html>
<head>
<meta charset="utf-8">
<title>Tomcat web socket</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.js"></script>
<script type="text/javascript" src="html2canvas.js?rev032"></script>
<script type="text/javascript">
var ws = new WebSocket("ws://localhost:8080/WebSocketSample/wsocket");
ws.onopen = function () {
    console.log("Web Socket Open");
};

ws.onmessage = function(message) {
    console.log("MSG from Server :" + message.data);
    //document.getElementById("msgArea").textContent += message.data + "\n";
    document.getElementById("msgArea").textContent += " Data Send\n";
};

function postToServerNew(data) {
    ws.send(JSON.stringify(data));
    document.getElementById("msg").value = "";
}

// Take a snapshot of the page every 9 seconds and send it over the web socket
setInterval(function(){
    var target = $('body');
    html2canvas(target, {
        onrendered: function(canvas) {
            var data = canvas.toDataURL();
            var jsonData = {
                type: 'video',
                data: data,
                duration: 5,
                timestamp: 0,     // set in worker
                currentFolder: 0  // set in worker
            };
            postToServerNew(jsonData);
        }
    });
}, 9000);

function closeConnect() {
    ws.close();
    console.log("Web Socket Closed: Bye TC");
}
</script>
</head>

<body>
  <div>
    <textarea rows="18" cols="150" id="msgArea" readonly></textarea>
  </div>
  <div>
    <input id="msg" type="text"/>
    <button type="submit" id="sendButton" onclick="postToServerNew('Arun')">Send MSG</button>
  </div>
</body>
</html>

2) MyWebSocketServlet.java
package sample;

import java.io.File;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

import org.apache.catalina.websocket.StreamInbound;
import org.apache.catalina.websocket.WebSocketServlet;

/**
 * WebSocketServlet is contained in catalina.jar. It also needs servlet-api.jar
 * on build path
 *
 * @author Arun
 *
 */
@WebServlet(“/wsocket”)
public class MyWebSocketServlet extends WebSocketServlet {

    private static final long serialVersionUID = 1L;

    // for new clients, <sessionId, streamInBound>
    private static ConcurrentHashMap<String, StreamInbound> clients = new ConcurrentHashMap<String, StreamInbound>();

    @Override
    protected StreamInbound createWebSocketInbound(String protocol, HttpServletRequest httpServletRequest) {

        // Check if exists
        HttpSession session = httpServletRequest.getSession();

        // find client
        StreamInbound client = clients.get(session.getId());
        if (null != client) {
            return client;

        } else {
            System.out.println(" session.getId() :" + session.getId());
            String targetLocation = "C:/Users/arsingh/Desktop/AnupData/DATA/" + session.getId();
            System.out.println(targetLocation);
            File fs = new File(targetLocation);
            boolean bool = fs.mkdirs();
            System.out.println(" Folder created :" + bool);
            client = new MyInBound(httpServletRequest, targetLocation + "/Output.txt");
            clients.put(session.getId(), client);
        }

        return client;
    }

    /*public StreamInbound getClient(String sessionId) {
        return clients.get(sessionId);
    }

    public void addClient(String sessionId, StreamInbound streamInBound) {
        clients.put(sessionId, streamInBound);
    }*/
}

3) MyInBound.java
package sample;

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

import javax.servlet.http.HttpServletRequest;

import org.apache.catalina.websocket.MessageInbound;
import org.apache.catalina.websocket.WsOutbound;

/**
 * Needs tomcat-coyote.jar on the class path, otherwise you get the compile error
 * "the hierarchy of the type ... is inconsistent"
 *
 * @author Arun
 *
 */
public class MyInBound extends MessageInbound {

    private String name;

    private WsOutbound myoutbound;

    private String targetLocation;

    public MyInBound(HttpServletRequest httpServletRequest, String targetLocation) {
        this.targetLocation = targetLocation;
    }

    @Override
    public void onOpen(WsOutbound outbound) {
        System.out.println("Web Socket Opened..");
        /*this.myoutbound = outbound;
        try {
            this.myoutbound.writeTextMessage(CharBuffer.wrap("Web Socket Opened.."));

        } catch (Exception e) {
            throw new RuntimeException(e);
        }*/

    }

    @Override
    public void onClose(int status) {
        System.out.println("Close client");
        // remove from list
    }

    @Override
    protected void onBinaryMessage(ByteBuffer arg0) throws IOException {
        System.out.println("onBinaryMessage Data");
        try {
            writeToFileNIOWay(new File(targetLocation), arg0.toString() + "\n");
        } catch (Exception e) {
            e.printStackTrace();
        } finally {

            //this.myoutbound.flush();
        }
    }// end of onBinaryMessage

    @Override
    protected void onTextMessage(CharBuffer inChar) throws IOException {
        System.out.println("onTextMessage Data");
        try {

            writeToFileNIOWay(new File(targetLocation), inChar.toString() + "\n");

        } catch (Exception e) {
            e.printStackTrace();
        } finally {

            //this.myoutbound.flush();
        }
    }// end of onTextMessage

    public void writeToFileNIOWay(File file, String messageToWrite) throws IOException {
        System.out.println("Data Location:" + file + "            Size:" + messageToWrite.length());
        //synchronized (this) {

          byte[] messageBytes = messageToWrite.getBytes();
          RandomAccessFile raf = new RandomAccessFile(file, "rw");
          raf.seek(raf.length());
          FileChannel fc = raf.getChannel();
          MappedByteBuffer mbf = fc.map(FileChannel.MapMode.READ_WRITE, fc.position(), messageBytes.length);
          mbf.put(messageBytes);
         fc.close();
        //}


    }//end of method

 }
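
A note on the server side: the org.apache.catalina.websocket classes used above are Tomcat 7 specific and were removed in Tomcat 8. On newer Tomcat versions the standard JSR-356 (javax.websocket) annotations cover the same ground. The following is only a minimal sketch of an equivalent endpoint (the class name and log messages are mine, not from the original example); the per-session file-writing logic from MyInBound would plug into onTextMessage.

package sample;

import java.io.IOException;

import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

/**
 * Rough JSR-356 equivalent of MyWebSocketServlet + MyInBound.
 * Tomcat creates one instance of this endpoint per client connection.
 */
@ServerEndpoint("/wsocket")
public class MyStandardEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("Web Socket Opened.. id=" + session.getId());
    }

    @OnMessage
    public void onTextMessage(String message, Session session) throws IOException {
        System.out.println("onTextMessage Data, length=" + message.length());
        // write the payload to a per-session file here, as MyInBound does
    }

    @OnClose
    public void onClose(Session session) {
        System.out.println("Close client " + session.getId());
    }
}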

Xuggler Java

Xuggler Java Implementation

Xuggler is an easy way to uncompress, modify, and re-compress any media file (or stream) from Java. If you are a Java developer who needs to programmatically manipulate video files, either pre-recorded or live, then Xuggler is for you.

Download the Xuggler SDK and add its JARs to your project's build path.
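
Before the longer programs, here is the smallest useful Xuggler job, shown only as a warm-up (the file paths are placeholders, not taken from the programs below): it reads one media file and re-encodes it into the format implied by the output file name, using the same IMediaReader/IMediaWriter mediatool classes the programs rely on.

package com.javacodegeeks.xuggler;

import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.ToolFactory;

public class SimpleTranscode {
    public static void main(String[] args) {
        // the reader decodes the input file
        IMediaReader reader = ToolFactory.makeReader("C:/temp/input.flv");
        // the writer listens to the reader and re-encodes everything it decodes
        reader.addListener(ToolFactory.makeWriter("C:/temp/output.mp4", reader));
        // keep reading packets until the end of the input is reached
        while (reader.readPacket() == null) {
        }
    }
}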

Program 1: Video Generator
package com.javacodegeeks.xuggler;

import java.awt.Dimension;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import javax.imageio.ImageIO;

import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.IAudioSamples;
import com.xuggle.xuggler.ICodec;
import com.xuggle.xuggler.IContainer;
import com.xuggle.xuggler.IPacket;
import com.xuggle.xuggler.IStream;
import com.xuggle.xuggler.IStreamCoder;
import com.xuggle.xuggler.IVideoPicture;

public class VideoGenerator {
    private static final String imagePath = "C:/Users/arsingh/Desktop/tempo/video";
    private static final String onlyVideoFile = "C:/Users/arsingh/Desktop/tempo/imageVideo.webm";
    private static final String audioFile = "C:/Users/arsingh/Desktop/tempo/audioFile/audio.wmv"; // audio file on your disk
    private static final String finalVideoFile = "C:/Users/arsingh/Desktop/tempo/finalVideo.mp4";

    private static Dimension screenBounds;
    private static Map<String, File> imageMap = new HashMap<String, File>();

    public static void main(String[] args) {
        screenBounds = Toolkit.getDefaultToolkit().getScreenSize();
        // createImageVideo();

        String audioVideoFile = mergeAudioVideo(onlyVideoFile, audioFile, finalVideoFile);
        // System.out.println("Audio Video Created in flv");

        // doWork(audioVideoFile, finalVideoFile);
        System.out.println("Video converted to MP4");
    }

    public static void createImageVideo() {
        final IMediaWriter writer = ToolFactory.makeWriter(onlyVideoFile);

        writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_VP8, screenBounds.width / 2, screenBounds.height / 2);

        File folder = new File(imagePath);
        File[] listOfFiles = folder.listFiles();

        int indexVal = 0;
        for (File file : listOfFiles) {
            if (file.isFile()) {
                indexVal++;
                System.out.println("file.getName() :" + file.getName());
                imageMap.put(file.getName(), file);
            }
        }

        // for (int index = 0; index < SECONDS_TO_RUN_FOR * FRAME_RATE; index++)
        // {
        for (int index = 1; index <= listOfFiles.length; index++) {
            BufferedImage screen = getImage(index);
            // BufferedImage bgrScreen = convertToType(screen,
            // BufferedImage.TYPE_3BYTE_BGR);
            BufferedImage bgrScreen = convertToType(screen, BufferedImage.TYPE_3BYTE_BGR);
            writer.encodeVideo(0, bgrScreen, 300 * index, TimeUnit.MILLISECONDS);

        }
        // tell the writer to close and write the trailer if needed
        writer.close();
        System.out.println("Image Video Created");
    }

    public static BufferedImage convertToType(BufferedImage sourceImage, int targetType) {
        BufferedImage image;
        if (sourceImage.getType() == targetType) {
            image = sourceImage;
        } else {
            image = new BufferedImage(sourceImage.getWidth(), sourceImage.getHeight(), targetType);
            image.getGraphics().drawImage(sourceImage, 0, 0, null);
        }
        return image;
    }

    private static BufferedImage getImage(int index) {

        try {
            String fileName = index + ".jpg";
            System.out.println("fileName :" + fileName);
            File img = imageMap.get(fileName);

            BufferedImage in = null;
            if (img != null) {
                System.out.println("img :" + img.getName());
                in = ImageIO.read(img);
            } else {
                System.out.println("++++++++++++++++++++++++++++++++++++++index :" + index);
                img = imageMap.get("1.jpg"); // fall back to the first frame when an index is missing
                in = ImageIO.read(img);
            }
            }
            return in;

        }

        catch (Exception e) {

            e.printStackTrace();

            return null;

        }

    }

    public static String mergeAudioVideo(String filenamevideo, String filenameaudio, String outputFile) {

        // String filenamevideo = "f:/testvidfol/video.mp4"; // input video file; you can change the extension
        // String filenameaudio = "f:/testvidfol/audio.wav"; // input audio file; you can change the extension

        IMediaWriter mWriter = ToolFactory.makeWriter(outputFile); // output file

        IContainer containerVideo = IContainer.make();
        IContainer containerAudio = IContainer.make();

        if (containerVideo.open(filenamevideo, IContainer.Type.READ, null) < 0)
            throw new IllegalArgumentException("Cannot find " + filenamevideo);

        if (containerAudio.open(filenameaudio, IContainer.Type.READ, null) < 0)
            throw new IllegalArgumentException("Cannot find " + filenameaudio);

        int numStreamVideo = containerVideo.getNumStreams();
        int numStreamAudio = containerAudio.getNumStreams();

        System.out.println("Number of video streams: " + numStreamVideo + "\n" + "Number of audio streams: " + numStreamAudio);

        int videostreamt = -1; // this is the video stream id
        int audiostreamt = -1;

        IStreamCoder videocoder = null;

        for (int i = 0; i < numStreamVideo; i++) {
            IStream stream = containerVideo.getStream(i);
            IStreamCoder code = stream.getStreamCoder();

            if (code.getCodecType() == ICodec.Type.CODEC_TYPE_VIDEO) {
                videostreamt = i;
                videocoder = code;
                break;
            }

        }

        for (int i = 0; i < numStreamAudio; i++) {
            IStream stream = containerAudio.getStream(i);
            IStreamCoder code = stream.getStreamCoder();

            if (code.getCodecType() == ICodec.Type.CODEC_TYPE_AUDIO) {
                audiostreamt = i;
                break;
            }

        }

        if (videostreamt == -1)
            throw new RuntimeException("No video stream found");
        if (audiostreamt == -1)
            throw new RuntimeException("No audio stream found");

        if (videocoder.open() < 0)
            throw new RuntimeException("Cannot open video coder");
        IPacket packetvideo = IPacket.make();

        IStreamCoder audioCoder = containerAudio.getStream(audiostreamt).getStreamCoder();

        if (audioCoder.open() < 0)
            throw new RuntimeException("Cannot open audio coder");
        mWriter.addAudioStream(1, 1, audioCoder.getChannels(), audioCoder.getSampleRate());

        mWriter.addVideoStream(0, 0, videocoder.getWidth(), videocoder.getHeight());
        // mWriter.addVideoStream(0, 0,
        // ICodec.ID.CODEC_ID_MPEG4,screenBounds.width / 2, screenBounds.height
        // / 2);
        // mWriter.addVideoStream(0, 0,
        // ICodec.ID.CODEC_ID_VP8,screenBounds.width / 2, screenBounds.height /
        // 2);

        IPacket packetaudio = IPacket.make();

        while (containerVideo.readNextPacket(packetvideo) >= 0 || containerAudio.readNextPacket(packetaudio) >= 0) {

            if (packetvideo.getStreamIndex() == videostreamt) {

                // video packet
                //IVideoPicture picture = IVideoPicture.make(videocoder.getPixelType(), videocoder.getWidth(), videocoder.getHeight());
                IVideoPicture picture = IVideoPicture.make(videocoder.getPixelType(), screenBounds.width * 2, screenBounds.height * 2);
                int offset = 0;
                while (offset < packetvideo.getSize()) {
                    int bytesDecoded = videocoder.decodeVideo(picture, packetvideo, offset);
                    if (bytesDecoded < 0)
                        throw new RuntimeException("bytesDecoded not working");
                    offset += bytesDecoded;

                    if (picture.isComplete()) {
                        System.out.println("picture.getPixelType() :" + picture.getPixelType());
                    //    mWriter.encodeVideo(0, picture);

                    }
                }
            }

            if (packetaudio.getStreamIndex() == audiostreamt) {
                // audio packet

                IAudioSamples samples = IAudioSamples.make(512, audioCoder.getChannels(), IAudioSamples.Format.FMT_S32);
                int offset = 0;
                while (offset < packetaudio.getSize()) {
                    int bytesDecodedaudio = audioCoder.decodeAudio(samples, packetaudio, offset);
                    if (bytesDecodedaudio < 0)
                        throw new RuntimeException("could not detect audio");
                    offset += bytesDecodedaudio;

                    if (samples.isComplete()) {
                        mWriter.encodeAudio(1, samples);

                    }
                }
            }
        }

        mWriter.close();
        return outputFile;
    }

}
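
If you want to feed this generator with the canvas snapshots sent over the WebSocket earlier, note that they arrive as data: URLs from canvas.toDataURL(). Below is a small sketch for turning such a payload into the numbered image files the generator reads (the folder, file names and the .png extension are assumptions; the generator above looks for .jpg names, so either rename accordingly or call canvas.toDataURL('image/jpeg') on the client).

import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Base64;

public class DataUrlToImage {

    /**
     * Writes a "data:image/png;base64,...." payload (as produced by canvas.toDataURL())
     * into a numbered image file such as 1.png, 2.png, ...
     */
    public static void save(String dataUrl, String folder, int index) throws IOException {
        // strip the "data:image/png;base64," prefix and keep only the Base64 part
        String base64 = dataUrl.substring(dataUrl.indexOf(',') + 1);
        byte[] bytes = Base64.getDecoder().decode(base64);
        try (FileOutputStream out = new FileOutputStream(folder + "/" + index + ".png")) {
            out.write(bytes);
        }
    }
}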

Program 2: Video Merger
package com.javacodegeeks.xuggler;
import static java.lang.System.out;

import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.IMediaViewer;
import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.MediaToolAdapter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.mediatool.event.AudioSamplesEvent;
import com.xuggle.mediatool.event.IAddStreamEvent;
import com.xuggle.mediatool.event.IAudioSamplesEvent;
import com.xuggle.mediatool.event.ICloseCoderEvent;
import com.xuggle.mediatool.event.ICloseEvent;
import com.xuggle.mediatool.event.IOpenCoderEvent;
import com.xuggle.mediatool.event.IOpenEvent;
import com.xuggle.mediatool.event.IVideoPictureEvent;
import com.xuggle.mediatool.event.VideoPictureEvent;
import com.xuggle.xuggler.IAudioSamples;
import com.xuggle.xuggler.IVideoPicture;
//import com.xuggle.mediatool.IMediaReader;

/**
 * A very simple media transcoder which uses {@link IMediaReader}, {@link
 * IMediaWriter} and {@link IMediaViewer}.
 */

public class MergingVideos
{
  /**
   * Concatenate two files.
   *
   * @param args 3 strings; an input file 1, input file 2, and an output file.
   */

  public static void main(String[] args)
  {

      String file1 = "C:/Users/arsingh/Desktop/AnupData/myvideo1.mp4"; // change accordingly
      String file2 = "C:/Users/arsingh/Desktop/AnupData/myvideo2.mp4"; // change accordingly
      String file3 = "C:/Users/arsingh/Desktop/AnupData/myvideo1.mp4"; // change accordingly
      // String file3 = "/home/naveen/workspace/video/s4.mp4";
      // String mergefile = "/home/naveen/workspace/converted/threefile.mp4";
      String mergefile = "C:/Users/arsingh/Desktop/AnupData/myvideo12.mp4"; // change accordingly
   concatenateThreeFiles(file1,file2,file3,mergefile);
    //  concatenate(file1,file2,mergefile);

  }


  public static void concatenateThreeFiles(String sourceUrl1, String sourceUrl2,String sourceUrl3,String destinationUrl)
  {
   System.out.println("transcoding starts");

   //video parameters
   final int videoStreamIndex = 0;
   final int videoStreamId = 0;
   final int width = 480 ;
   final int height = 272;

   //audio parameters

   final int audioStreamIndex = 1;
   final int audioStreamId = 0;
   final int channelCount = 2;
   final int sampleRate = 44100; //Hz

   IMediaReader reader1 = ToolFactory.makeReader(sourceUrl1);
   IMediaReader reader2 = ToolFactory.makeReader(sourceUrl2);
   IMediaReader reader3 = ToolFactory.makeReader(sourceUrl3);

   MediaConcatenator concatenator = new MediaConcatenator(audioStreamIndex,videoStreamIndex);
   reader1.addListener(concatenator);
   reader2.addListener(concatenator);
   reader3.addListener(concatenator);

   IMediaWriter writer = ToolFactory.makeWriter(destinationUrl);
   concatenator.addListener(writer);
   writer.addVideoStream(videoStreamIndex, videoStreamId, width,height);
   writer.addAudioStream(audioStreamIndex, audioStreamId, channelCount, sampleRate);

   while(reader1.readPacket() == null);

   while(reader2.readPacket() == null);

   while(reader3.readPacket() == null);

    writer.close();
    System.out.println("\nfinished merging");
  }

  /**
   * Concatenate two source files into one destination file.
   *
   * @param sourceUrl1 the file which will appear first in the output
   * @param sourceUrl2 the file which will appear second in the output
   * @param destinationUrl the file which will be produced
   */

  public static void concatenate(String sourceUrl1, String sourceUrl2,String destinationUrl)
  {
    out.printf("\ntranscode %s + %s -> %s\n", sourceUrl1, sourceUrl2,
      destinationUrl);

    //////////////////////////////////////////////////////////////////////
    //                                                                  //
    // NOTE: be sure that the audio and video parameters match those of //
    // your input media                                                 //
    //                                                                  //
    //////////////////////////////////////////////////////////////////////

    // video parameters

    final int videoStreamIndex = 0;
    final int videoStreamId = 0;
    final int width = 480;
    final int height = 272;

    // audio parameters

    //commented by vivek
    final int audioStreamIndex = 1;
    final int audioStreamId = 0;
    final int channelCount = 2;
    final int sampleRate = 44100; // Hz

    // create the first media reader

    IMediaReader reader1 = ToolFactory.makeReader(sourceUrl1);

    // create the second media reader

    IMediaReader reader2 = ToolFactory.makeReader(sourceUrl2);

    // create the media concatenator

    MediaConcatenator concatenator = new MediaConcatenator(audioStreamIndex,
      videoStreamIndex);

    // concatenator listens to both readers

    reader1.addListener(concatenator);
    reader2.addListener(concatenator);

    // create the media writer which listens to the concatenator

    IMediaWriter writer = ToolFactory.makeWriter(destinationUrl);
    concatenator.addListener(writer);

    // add the video stream

    writer.addVideoStream(videoStreamIndex, videoStreamId, width, height);

    // add the audio stream

  //  writer.addAudioStream(audioStreamIndex, audioStreamId, channelCount,sampleRate);

    // read packets from the first source file until done

    while (reader1.readPacket() == null)
    {
        // the concatenator and writer do the work as each packet is decoded
    }

    // read packets from the second source file until done

    while (reader2.readPacket() == null)
    {

    }

    // close the writer

    writer.close();
    System.out.println("\nfinish");
  }

  static class MediaConcatenator extends MediaToolAdapter
  {
    // the current offset

    private long mOffset = 0;

    // the next video timestamp

    private long mNextVideo = 0;

    // the next audio timestamp

    private long mNextAudio = 0;

    // the index of the audio stream

    private final int mAudoStreamIndex;

    // the index of the video stream

    private final int mVideoStreamIndex;

    /**
     * Create a concatenator.
     *
     * @param audioStreamIndex index of audio stream
     * @param videoStreamIndex index of video stream
     */

    public MediaConcatenator(int audioStreamIndex, int videoStreamIndex)
    {
      mAudoStreamIndex = audioStreamIndex;
      mVideoStreamIndex = videoStreamIndex;
    }

    public void onAudioSamples(IAudioSamplesEvent event)
    {
      IAudioSamples samples = event.getAudioSamples();

      // set the new time stamp to the original plus the offset established
      // for this media file

      long newTimeStamp = samples.getTimeStamp() + mOffset;

      // keep track of predicted time of the next audio samples, if the end
      // of the media file is encountered, then the offset will be adjusted
      // to this time.

      mNextAudio = samples.getNextPts();

      // set the new timestamp on audio samples

      samples.setTimeStamp(newTimeStamp);

      // create a new audio samples event with the one true audio stream
      // index

      super.onAudioSamples(new AudioSamplesEvent(this, samples,
        mAudoStreamIndex));
    }

    public void onVideoPicture(IVideoPictureEvent event)
    {
      IVideoPicture picture = event.getMediaData();
      long originalTimeStamp = picture.getTimeStamp();

      // set the new time stamp to the original plus the offset established
      // for this media file

      long newTimeStamp = originalTimeStamp + mOffset;

      // keep track of predicted time of the next video picture, if the end
      // of the media file is encountered, then the offset will be adjusted
      // to this time.
      //
      // You'll note in the audio samples listener above we used
      // a method called getNextPts().  Video pictures don't have
      // a similar method because frame-rates can be variable, so
      // we don't know.  The minimum thing we do know though (since
      // all media containers require media to have monotonically
      // increasing time stamps), is that the next video timestamp
      // should be at least one tick ahead.  So, we fake it.

      mNextVideo = originalTimeStamp + 1;

      // set the new timestamp on video samples

      picture.setTimeStamp(newTimeStamp);

      // create a new video picture event with the one true video stream
      // index

      super.onVideoPicture(new VideoPictureEvent(this, picture,
        mVideoStreamIndex));
    }

    public void onClose(ICloseEvent event)
    {
      // update the offset by the larger of the next expected audio or video
      // frame time

      mOffset = Math.max(mNextVideo, mNextAudio);

      if (mNextAudio < mNextVideo)
      {
        // In this case we know that there is more video in the
        // last file that we read than audio. Technically you
        // should pad the audio in the output file with enough
        // samples to fill that gap, as many media players (e.g.
        // Quicktime, Microsoft Media Player, MPlayer) actually
        // ignore audio time stamps and just play audio sequentially.
        // If you don’t pad, in those players it may look like
        // audio and video is getting out of sync.

        // However kiddies, this is demo code, so that code
        // is left as an exercise for the readers. As a hint,
        // see the IAudioSamples.defaultPtsToSamples(…) methods.
      }
    }

    public void onAddStream(IAddStreamEvent event)
    {
      // overridden to ensure that add stream events are not passed down
      // the tool chain to the writer, which could cause problems
    }

    public void onOpen(IOpenEvent event)
    {
      // overridden to ensure that open events are not passed down the tool
      // chain to the writer, which could cause problems
    }

    public void onOpenCoder(IOpenCoderEvent event)
    {
      // overridden to ensure that open coder events are not passed down the
      // tool chain to the writer, which could cause problems
    }

    public void onCloseCoder(ICloseCoderEvent event)
    {
      // overridden to ensure that close coder events are not passed down the
      // tool chain to the writer, which could cause problems
    }
  }
}

Program 3: Video Info
package com.javacodegeeks.xuggler;

import com.xuggle.xuggler.ICodec;
import com.xuggle.xuggler.IContainer;
import com.xuggle.xuggler.IStream;
import com.xuggle.xuggler.IStreamCoder;

public class VideoInfo {

     private static final String filename1 = "C:/Users/arsingh/Desktop/tempo/ww.webm";
     //private static final String filename2 = "C:/Users/arsingh/Desktop/AnupData/myvideo2.mp4";
     //private static final String filenameOutput = "C:/Users/arsingh/Desktop/AnupData/myvideo2.mp4";

    public static void main(String[] args) {

        // first we create a Xuggler container object
        IContainer container = IContainer.make();
    //    IContainer container2 = IContainer.make();
        IContainer containerOutput = IContainer.make();
        // we attempt to open up the container
        int result1 = container.open(filename1, IContainer.Type.READ, null);
      //  int result2 = container2.open(filename2, IContainer.Type.READ, null);
        int resultOutput = containerOutput.open(filename1, IContainer.Type.WRITE, null);

        // check if the operation was successful
        if (result1 < 0)
            throw new RuntimeException("Failed to open media file 1");
        /*if (result2 < 0)
            throw new RuntimeException("Failed to open media file 2");
        */

        // query how many streams the call to open found
        int numStreams = container.getNumStreams();
       // int numStreams2 = container2.getNumStreams();

        // query for the total duration
        long duration = container.getDuration();

        // query for the file size
        long fileSize = container.getFileSize();

        // query for the bit rate
        long bitRate = container.getBitRate();

        System.out.println("Number of streams: " + numStreams);
        System.out.println("Duration (microseconds): " + duration);
        System.out.println("File Size (bytes): " + fileSize);
        System.out.println("Bit Rate: " + bitRate);

        // iterate through the streams to print their meta data
        for (int i=0; i<numStreams; i++) {

            // find the stream object
            IStream stream = container.getStream(i);

            // get the pre-configured decoder that can decode this stream;
            IStreamCoder coder = stream.getStreamCoder();
            containerOutput.addNewStream(coder);

            System.out.println("*** Start of Stream Info ***");

            System.out.printf("stream %d: ", i);
            System.out.printf("type: %s; ", coder.getCodecType());
            System.out.printf("codec: %s; ", coder.getCodecID());
            System.out.printf("duration: %s; ", stream.getDuration());
            System.out.printf("start time: %s; ", container.getStartTime());
            System.out.printf("timebase: %d/%d; ",
                 stream.getTimeBase().getNumerator(),
                 stream.getTimeBase().getDenominator());
            System.out.printf("coder tb: %d/%d; ",
                 coder.getTimeBase().getNumerator(),
                 coder.getTimeBase().getDenominator());
            System.out.println();

            if (coder.getCodecType() == ICodec.Type.CODEC_TYPE_AUDIO) {
                System.out.printf("sample rate: %d; ", coder.getSampleRate());
                System.out.printf("channels: %d; ", coder.getChannels());
                System.out.printf("format: %s", coder.getSampleFormat());
            }
            else if (coder.getCodecType() == ICodec.Type.CODEC_TYPE_VIDEO) {
                System.out.printf("width: %d; ", coder.getWidth());
                System.out.printf("height: %d; ", coder.getHeight());
                System.out.printf("format: %s; ", coder.getPixelType());
                System.out.printf("frame-rate: %5.2f; ", coder.getFrameRate().getDouble());
            }

            System.out.println();
            System.out.println("*** End of Stream Info ***");

        }
        containerOutput.close();

    }

}

Multicasting in UDP

Multicasting using DatagramSocket and multithreading

//QuoteServerThread.java
import java.io.*;
import java.net.*;
import java.util.*;

public class QuoteServerThread extends Thread {

    protected DatagramSocket socket = null;
    protected BufferedReader in = null;
    protected boolean moreQuotes = true;

    public QuoteServerThread() throws IOException {
        this("QuoteServerThread");
    }

    public QuoteServerThread(String name) throws IOException {
        super(name);
        socket = new DatagramSocket(4445);

        try {
            in = new BufferedReader(new FileReader("us.java"));
        } catch (FileNotFoundException e) {
            System.err.println("Could not open quote file. Serving time instead.");
        }
    }

    public void run() {
        while (moreQuotes) {
            try {
                byte[] buf = new byte[256];

                // receive request
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);

                // figure out response
                String dString = null;
                if (in == null)
                    dString = new Date().toString();
                else
                    dString = getNextQuote();
                buf = dString.getBytes();

                // send the response to the client at "address" and "port"
                InetAddress address = packet.getAddress();
                int port = packet.getPort();
                packet = new DatagramPacket(buf, buf.length, address, port);
                socket.send(packet);
            } catch (IOException e) {
                e.printStackTrace();
                moreQuotes = false;
            }
        }
        socket.close();
    }

    protected String getNextQuote() {
        String returnValue = null;
        try {
            if ((returnValue = in.readLine()) == null) {
                in.close();
                moreQuotes = false;
                returnValue = "No more quotes. Goodbye.";
            }
        } catch (IOException e) {
            returnValue = "IOException occurred in server.";
        }
        return returnValue;
    }
}

2)
//MulticastServerThread.java
import java.io.*;
import java.net.*;
import java.util.*;

public class MulticastServerThread extends QuoteServerThread {

    private long FIVE_SECONDS = 5000;

    public MulticastServerThread() throws IOException {
        super("MulticastServerThread");
    }

    public void run() {
        while (moreQuotes) {
            try {
                byte[] buf = new byte[256];

                // construct quote
                String dString = null;
                if (in == null)
                    dString = new Date().toString();
                else
                    dString = getNextQuote();
                buf = dString.getBytes();

                // send it to the multicast group on port 4446
                InetAddress group = InetAddress.getByName("230.0.0.1");
                DatagramPacket packet = new DatagramPacket(buf, buf.length, group, 4446);
                socket.send(packet);

                // sleep for a while
                try {
                    sleep((long) (Math.random() * FIVE_SECONDS));
                } catch (InterruptedException e) { }
            } catch (IOException e) {
                e.printStackTrace();
                moreQuotes = false;
            }
        }
        socket.close();
    }
}

3)
//MulticastServer.java
public class MulticastServer {
    public static void main(String[] args) throws java.io.IOException {
        new MulticastServerThread().start();
    }
}

//MulticastClient.java
import java.io.*;
import java.net.*;
import java.util.*;

public class MulticastClient {

    public static void main(String[] args) throws IOException {

        MulticastSocket socket = new MulticastSocket(4446);
        InetAddress address = InetAddress.getByName("230.0.0.1");
        socket.joinGroup(address);

        DatagramPacket packet;

        // get a few quotes
        for (int i = 0; i < 5; i++) {
            byte[] buf = new byte[256];
            packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);

            String received = new String(packet.getData(), 0, packet.getLength());
            System.out.println("Quote of the Moment: " + received);
        }

        socket.leaveGroup(address);
        socket.close();
    }
}

/*
 Command prompt 1:
    > javac QuoteServerThread.java
    > javac MulticastServer.java
    > java MulticastServer
 (Make sure the quote file name referenced in QuoteServerThread exists on your PC.)

 Command prompt 2:
    > javac MulticastClient.java
    > java MulticastClient
*/
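
One caveat for newer JDKs: the joinGroup(InetAddress) and leaveGroup(InetAddress) calls used in MulticastClient are deprecated since Java 14 in favor of the overloads that also take a NetworkInterface. Below is a minimal client sketch using those overloads (the interface lookup is an assumption; pick whichever network interface is right for your machine).

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.MulticastSocket;
import java.net.NetworkInterface;

public class MulticastClientModern {

    public static void main(String[] args) throws IOException {
        MulticastSocket socket = new MulticastSocket(4446);
        InetSocketAddress group = new InetSocketAddress(InetAddress.getByName("230.0.0.1"), 4446);
        // assumption: the interface that owns the local host address; adjust for your machine
        NetworkInterface netIf = NetworkInterface.getByInetAddress(InetAddress.getLocalHost());

        socket.joinGroup(group, netIf);

        byte[] buf = new byte[256];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        socket.receive(packet);
        System.out.println("Quote of the Moment: " + new String(packet.getData(), 0, packet.getLength()));

        socket.leaveGroup(group, netIf);
        socket.close();
    }
}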