
Other articles (73)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a link in the navigation, "Modifier votre profil", is (...)

On other sites (9982)

  • Streaming mp4a to localhost using udp and ffmpeg

    2 August 2017, by noswoscar

    I am using the following command to stream a video and its audio to localhost:
    ffmpeg -re -i out.mp4 -map 0:0 -vcodec libx264 -f h264 udp://127.0.0.1:1234 -map 0:1 -acodec libfaac -f mp4a udp://127.0.0.1:2020

    FFmpeg is not recognising my audio codec and my audio format, so I get the following error message:
    [screenshot: error message]

    What audio format and codec do I need to use? The codec information of the video I wish to send is as follows:
    [screenshot: codec information]

    When I convert the audio track to mp3, I can run the above command and stream the video and audio properly. However, I don't want to convert all of my videos' audio tracks to mp3.

    (I am confused by all the encoders, decoders, and codec names in the ffmpeg documentation.) Is there a way of finding the right encoder to use with the mp4a audio codec other than reading the whole list of codecs and options?

    Thanks.
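
    As a rough reference, here is a minimal sketch of how one might list AAC-capable encoders and send the audio as an ADTS-wrapped AAC stream instead of "-f mp4a"; the built-in aac encoder, the adts muxer, and the ports shown are illustrative assumptions, not taken from the post above:

    # list encoders whose name or description mentions AAC
    ffmpeg -encoders | grep -i aac

    # illustrative split: raw H.264 video to one UDP port, ADTS/AAC audio to another
    ffmpeg -re -i out.mp4 \
      -map 0:0 -c:v libx264 -f h264 udp://127.0.0.1:1234 \
      -map 0:1 -c:a aac -f adts udp://127.0.0.1:2020

    Whether raw elementary streams over UDP are the right transport depends on the receiver; muxing both streams into a single mpegts output on one port is another common option.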

  • Decode mp3 using FFMpeg, Android NDK - What is wrong with my AVFormatContext?

    27 February 2020, by michpohl

    I am trying to decode an MP3 file to a raw PCM stream using FFMpeg via JNI on Android. I have compiled the latest FFMpeg version (4.2) and added it to my app. This did not cause any problems.
    The goal is to be able to use mp3 files from the device’s storage for playback with oboe.

    Since I am relatively inexperienced with both C++ and FFMpeg, my approach is based upon this:
    oboe’s RhythmGame example

    I have based my FFMpegExtractor class on the one found in the example here. With the help of StackOverflow, the AAssetManager use was removed, and a MediaSource helper class now serves as a wrapper for my stream instead (see here).

    But unfortunately, creating the AVFormatContext doesn’t work right, and I can’t seem to understand why. Since I have a very limited understanding of correct pointer usage and C++ memory management, I suspect it’s most likely that I’m doing something wrong in that area. But honestly, I have no idea.

    This is my FFMpegExtractor.h:

    #ifndef MYAPP_FFMPEGEXTRACTOR_H
    #define MYAPP_FFMPEGEXTRACTOR_H

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libswresample/swresample.h>
    #include <libavutil/opt.h>
    }

    #include <cstdint>
    #include <android/asset_manager.h>
    #include <string>
    #include <fstream>
    #include "MediaSource.cpp"


    class FFMpegExtractor {
    public:

       FFMpegExtractor();

       ~FFMpegExtractor();

       int64_t decode2(char *filepath, uint8_t *targetData, AudioProperties targetProperties);

    private:
       MediaSource *mSource;

       bool createAVFormatContext(AVIOContext *avioContext, AVFormatContext **avFormatContext);

       bool openAVFormatContext(AVFormatContext *avFormatContext);

       int32_t cleanup(AVIOContext *avioContext, AVFormatContext *avFormatContext);

       bool getStreamInfo(AVFormatContext *avFormatContext);

       AVStream *getBestAudioStream(AVFormatContext *avFormatContext);

       AVCodec *findCodec(AVCodecID id);

       void printCodecParameters(AVCodecParameters *params);

       bool createAVIOContext2(const std::string &filePath, uint8_t *buffer, uint32_t bufferSize,
                               AVIOContext **avioContext);
    };


    #endif //MYAPP_FFMPEGEXTRACTOR_H

    This is FFMpegExtractor.cpp:

    #include <memory>
    #include <oboe/Definitions.h>
    #include "FFMpegExtractor.h"
    #include "logging.h"
    #include <fstream>

    FFMpegExtractor::FFMpegExtractor() {
       mSource = new MediaSource;
    }

    FFMpegExtractor::~FFMpegExtractor() {
       delete mSource;
    }

    constexpr int kInternalBufferSize = 1152; // Use MP3 block size. https://wiki.hydrogenaud.io/index.php?title=MP3

    /**
    * Reads from an IStream into FFmpeg.
    *
    * @param ptr       A pointer to the user-defined IO data structure.
    * @param buf       A buffer to read into.
    * @param buf_size  The size of the buffer buff.
    *
    * @return The number of bytes read into the buffer.
    */


    // If FFmpeg needs to read the file, it will call this function.
    // We need to fill the buffer with file's data.
    int read(void *opaque, uint8_t *buffer, int buf_size) {
       MediaSource *source = (MediaSource *) opaque;
       return source->read(buffer, buf_size);
    }

    // If FFmpeg needs to seek in the file, it will call this function.
    // We need to change the read pos.
    int64_t seek(void *opaque, int64_t offset, int whence) {
       MediaSource *source = (MediaSource *) opaque;
       return source->seek(offset, whence);
    }


    // Create and save a MediaSource instance.
    bool FFMpegExtractor::createAVIOContext2(const std::string &filepath, uint8_t *buffer, uint32_t bufferSize,
                                            AVIOContext **avioContext) {

       mSource = new MediaSource;
       mSource->open(filepath);
       constexpr int isBufferWriteable = 0;

       *avioContext = avio_alloc_context(
               buffer, // internal buffer for FFmpeg to use
               bufferSize, // For optimal decoding speed this should be the protocol block size
               isBufferWriteable,
               mSource, // Will be passed to our callback functions as a (void *)
               read, // Read callback function
               nullptr, // Write callback function (not used)
               seek); // Seek callback function

       if (*avioContext == nullptr) {
           LOGE("Failed to create AVIO context");
           return false;
       } else {
           return true;
       }
    }

    bool
    FFMpegExtractor::createAVFormatContext(AVIOContext *avioContext,
                                          AVFormatContext **avFormatContext) {

       *avFormatContext = avformat_alloc_context();
       (*avFormatContext)->pb = avioContext;

       if (*avFormatContext == nullptr) {
           LOGE("Failed to create AVFormatContext");
           return false;
       } else {
           LOGD("Successfully created AVFormatContext");
           return true;
       }
    }

    bool FFMpegExtractor::openAVFormatContext(AVFormatContext *avFormatContext) {

       int result = avformat_open_input(&avFormatContext,
                                        "", /* URL is left empty because we're providing our own I/O */
                                        nullptr /* AVInputFormat *fmt */,
                                        nullptr /* AVDictionary **options */
       );

       if (result == 0) {
           return true;
       } else {
           LOGE("Failed to open file. Error code %s", av_err2str(result));
           return false;
       }
    }

    bool FFMpegExtractor::getStreamInfo(AVFormatContext *avFormatContext) {

       int result = avformat_find_stream_info(avFormatContext, nullptr);
       if (result == 0) {
           return true;
       } else {
           LOGE("Failed to find stream info. Error code %s", av_err2str(result));
           return false;
       }
    }

    AVStream *FFMpegExtractor::getBestAudioStream(AVFormatContext *avFormatContext) {

       int streamIndex = av_find_best_stream(avFormatContext, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);

       if (streamIndex < 0) {
           LOGE("Could not find stream");
           return nullptr;
       } else {
           return avFormatContext->streams[streamIndex];
       }
    }

    int64_t FFMpegExtractor::decode2(
           char* filepath,
           uint8_t *targetData,
           AudioProperties targetProperties) {

       LOGD("Decode SETUP");
       int returnValue = -1; // -1 indicates error

       // Create a buffer for FFmpeg to use for decoding (freed in the custom deleter below)
       auto buffer = reinterpret_cast<uint8_t *>(av_malloc(kInternalBufferSize));


       // Create an AVIOContext with a custom deleter
       std::unique_ptr<AVIOContext, void(*)(AVIOContext *)> ioContext{
               nullptr,
               [](AVIOContext *c) {
                   av_free(c->buffer);
                    avio_context_free(&c);
               }
       };
       {
           AVIOContext *tmp = nullptr;
           if (!createAVIOContext2(filepath, buffer, kInternalBufferSize, &tmp)) {
               LOGE("Could not create an AVIOContext");
               return returnValue;
           }
           ioContext.reset(tmp);
       }
       // Create an AVFormatContext using the avformat_free_context as the deleter function
       std::unique_ptr<AVFormatContext, decltype(&avformat_free_context)> formatContext{
               nullptr,
               &avformat_free_context
       };

       {
           AVFormatContext *tmp;
           if (!createAVFormatContext(ioContext.get(), &tmp)) return returnValue;
           formatContext.reset(tmp);
       }
       if (!openAVFormatContext(formatContext.get())) return returnValue;
       LOGD("172");

       if (!getStreamInfo(formatContext.get())) return returnValue;
       LOGD("175");

       // Obtain the best audio stream to decode
       AVStream *stream = getBestAudioStream(formatContext.get());
       if (stream == nullptr || stream->codecpar == nullptr) {
           LOGE("Could not find a suitable audio stream to decode");
           return returnValue;
       }
       LOGD("183");

       printCodecParameters(stream->codecpar);

       // Find the codec to decode this stream
       AVCodec *codec = avcodec_find_decoder(stream->codecpar->codec_id);
       if (!codec) {
           LOGE("Could not find codec with ID: %d", stream->codecpar->codec_id);
           return returnValue;
       }

       // Create the codec context, specifying the deleter function
       std::unique_ptr<AVCodecContext, void(*)(AVCodecContext *)> codecContext{
               nullptr,
               [](AVCodecContext *c) { avcodec_free_context(&c); }
       };
       {
           AVCodecContext *tmp = avcodec_alloc_context3(codec);
           if (!tmp) {
               LOGE("Failed to allocate codec context");
               return returnValue;
           }
           codecContext.reset(tmp);
       }

       // Copy the codec parameters into the context
       if (avcodec_parameters_to_context(codecContext.get(), stream->codecpar) < 0) {
           LOGE("Failed to copy codec parameters to codec context");
           return returnValue;
       }

       // Open the codec
       if (avcodec_open2(codecContext.get(), codec, nullptr) < 0) {
           LOGE("Could not open codec");
           return returnValue;
       }

       // prepare resampler
       int32_t outChannelLayout = (1 << targetProperties.channelCount) - 1;
       LOGD("Channel layout %d", outChannelLayout);

       SwrContext *swr = swr_alloc();
       av_opt_set_int(swr, "in_channel_count", stream->codecpar->channels, 0);
       av_opt_set_int(swr, "out_channel_count", targetProperties.channelCount, 0);
       av_opt_set_int(swr, "in_channel_layout", stream->codecpar->channel_layout, 0);
       av_opt_set_int(swr, "out_channel_layout", outChannelLayout, 0);
       av_opt_set_int(swr, "in_sample_rate", stream->codecpar->sample_rate, 0);
       av_opt_set_int(swr, "out_sample_rate", targetProperties.sampleRate, 0);
       av_opt_set_int(swr, "in_sample_fmt", stream->codecpar->format, 0);
       av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_FLT, 0);
       av_opt_set_int(swr, "force_resampling", 1, 0);

       // Check that resampler has been inited
       int result = swr_init(swr);
       if (result != 0) {
           LOGE("swr_init failed. Error: %s", av_err2str(result));
           return returnValue;
       };
       if (!swr_is_initialized(swr)) {
           LOGE("swr_is_initialized is false\n");
           return returnValue;
       }

       // Prepare to read data
       int bytesWritten = 0;
       AVPacket avPacket; // Stores compressed audio data
       av_init_packet(&avPacket);
       AVFrame *decodedFrame = av_frame_alloc(); // Stores raw audio data
       int bytesPerSample = av_get_bytes_per_sample((AVSampleFormat) stream->codecpar->format);

       LOGD("Bytes per sample %d", bytesPerSample);

       // While there is more data to read, read it into the avPacket
       while (av_read_frame(formatContext.get(), &avPacket) == 0) {

           if (avPacket.stream_index == stream->index) {

               while (avPacket.size > 0) {
                   // Pass our compressed data into the codec
                   result = avcodec_send_packet(codecContext.get(), &avPacket);
                   if (result != 0) {
                       LOGE("avcodec_send_packet error: %s", av_err2str(result));
                       goto cleanup;
                   }

                   // Retrieve our raw data from the codec
                   result = avcodec_receive_frame(codecContext.get(), decodedFrame);
                   if (result != 0) {
                       LOGE("avcodec_receive_frame error: %s", av_err2str(result));
                       goto cleanup;
                   }

                   // DO RESAMPLING
                   auto dst_nb_samples = (int32_t) av_rescale_rnd(
                           swr_get_delay(swr, decodedFrame->sample_rate) + decodedFrame->nb_samples,
                           targetProperties.sampleRate,
                           decodedFrame->sample_rate,
                           AV_ROUND_UP);

                   short *buffer1;
                   av_samples_alloc(
                           (uint8_t **) &buffer1,
                           nullptr,
                           targetProperties.channelCount,
                           dst_nb_samples,
                           AV_SAMPLE_FMT_FLT,
                           0);
                   int frame_count = swr_convert(
                           swr,
                           (uint8_t **) &buffer1,
                           dst_nb_samples,
                           (const uint8_t **) decodedFrame->data,
                           decodedFrame->nb_samples);

                   int64_t bytesToWrite = frame_count * sizeof(float) * targetProperties.channelCount;
                   memcpy(targetData + bytesWritten, buffer1, (size_t) bytesToWrite);
                   bytesWritten += bytesToWrite;
                   av_freep(&buffer1);

                   avPacket.size = 0;
                   avPacket.data = nullptr;
               }
           }
       }

       av_frame_free(&decodedFrame);

       returnValue = bytesWritten;

       cleanup:
       return returnValue;
    }

    void FFMpegExtractor::printCodecParameters(AVCodecParameters *params) {

       LOGD("Stream properties");
       LOGD("Channels: %d", params->channels);
       LOGD("Channel layout: %"
                    PRId64, params->channel_layout);
       LOGD("Sample rate: %d", params->sample_rate);
       LOGD("Format: %s", av_get_sample_fmt_name((AVSampleFormat) params->format));
       LOGD("Frame size: %d", params->frame_size);
    }

    And this is MediaSource.cpp:

    #ifndef MYAPP_MEDIASOURCE_CPP
    #define MYAPP_MEDIASOURCE_CPP

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libswresample/swresample.h>
    #include <libavutil/opt.h>
    }

    #include <cstdint>
    #include <android/asset_manager.h>
    #include <string>
    #include <fstream>
    #include "logging.h"

    // wrapper class for file stream
    class MediaSource {
    public:

       MediaSource() {
       }

       ~MediaSource() {
           source.close();
       }

       void open(const std::string &filePath) {
           const char *x = filePath.c_str();
           LOGD("Opened %s", x);
           source.open(filePath, std::ios::in | std::ios::binary);
       }

       int read(uint8_t *buffer, int buf_size) {
           // read data to buffer
           source.read((char *) buffer, buf_size);
           // return how many bytes were read
           return source.gcount();
       }

       int64_t seek(int64_t offset, int whence) {
           if (whence == AVSEEK_SIZE) {
               // FFmpeg needs file size.
               int oldPos = source.tellg();
               source.seekg(0, std::ios::end);
               int64_t length = source.tellg();
               // seek to old pos
               source.seekg(oldPos);
               return length;
           } else if (whence == SEEK_SET) {
               // set pos to offset
               source.seekg(offset);
           } else if (whence == SEEK_CUR) {
               // add offset to pos
               source.seekg(offset, std::ios::cur);
           } else {
               // do not support other flags, return -1
               return -1;
           }
           // return current pos
           return source.tellg();
       }

    private:
       std::ifstream source;
    };

    #endif //MYAPP_MEDIASOURCE_CPP

    When the code is executed, I can see that I submit the correct file path, so I assume the resource mp3 is there.
    The app then crashes in line 103 of FFMpegExtractor.cpp, at formatContext.reset(tmp);

    This is what Android Studio logs when the app crashes:

    --------- beginning of crash
    2020-02-27 14:31:26.341 9852-9945/com.user.myapp A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x7fffffff0 in tid 9945 (chaelpohl.loopy), pid 9852 (user.myapp)

    This is the (sadly very short) output I get with ndk-stack:

    ********** Crash dump: **********
    Build fingerprint: 'samsung/dreamltexx/dreamlte:9/PPR1.180610.011/G950FXXU6DSK9:user/release-keys'
    #00 0x0000000000016c50 /data/app/com.user.myapp-D7dBCgHF-vdQNNSald4lWA==/lib/arm64/libavformat.so (avformat_free_context+260)
                                                                                                            avformat_free_context
                                                                                                            ??:0:0
    Crash dump is completed

    I tested around a bit, and every call involving my formatContext crashes the app. So I assume there is something wrong with the input I provide to build it, but I have no clue how to debug this.

    Any help is appreciated! (Happy to provide additional resources if something crucial is missing.)
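
    Not a diagnosis of the crash, but for comparison, here is a minimal sketch (assuming FFmpeg 4.x and the same LOGE macro; the helper name is made up) of one way to order the setup: allocate the context, null-check it, and only then attach the custom I/O. Note that the posted createAVFormatContext assigns ->pb before its null check, and AVFMT_FLAG_CUSTOM_IO is commonly set when supplying your own AVIOContext:

    bool createFormatContextChecked(AVIOContext *avioContext, AVFormatContext **avFormatContext) {
        *avFormatContext = avformat_alloc_context();
        if (*avFormatContext == nullptr) {
            LOGE("Failed to allocate AVFormatContext");
            return false;
        }
        // attach the custom I/O only after the allocation is known to have succeeded
        (*avFormatContext)->pb = avioContext;
        (*avFormatContext)->flags |= AVFMT_FLAG_CUSTOM_IO;
        return true;
    }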

  • Error while opening encoder for output stream #0:0 for Creating Video from image,Gif,music

    29 February 2020, by brijesh

    I am trying to create a video from an image, a GIF, and music.

    Here is the code I used:

    {"-y", "-i", imagepath, "-ignore_loop", "0", "-i", gif, "-filter_complex", "[1:v]scale=" + filterdBitmap.getWidth() + ":" + filterdBitmap.getHeight() + "[ovrl];[0:v][ovrl]overlay=0:0", "-ss", "" + startMs / 1000, "-t", "" + endMs / 1000, "-i", songpath, "-c:v", "libx264", "-preset", "ultrafast", "-r", "30", "-pix_fmt", "yuva420p", "-c:a", "aac", "-shortest", outputLocation.getPath()};

    The error I received was this:

    Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

    Specifically, here is the complete response:

    2020-02-29 10:16:01.043 14913-14913/com.photocreator E/fail: ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
         built with gcc 4.8 (GCC)
         configuration: --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
         libavutil      55. 17.103 / 55. 17.103
         libavcodec     57. 24.102 / 57. 24.102
         libavformat    57. 25.100 / 57. 25.100
         libavdevice    57.  0.101 / 57.  0.101
         libavfilter     6. 31.100 /  6. 31.100
         libswscale      4.  0.100 /  4.  0.100
         libswresample   2.  0.101 /  2.  0.101
         libpostproc    54.  0.100 / 54.  0.100
       Input #0, png_pipe, from 'file:///storage/emulated/0/1582951553006.jpg':
         Duration: N/A, bitrate: N/A
           Stream #0:0: Video: png, rgba(pc), 639x812, 25 tbr, 25 tbn, 25 tbc
       Input #1, gif, from 'http://13.232.145.224:3003/getpath/video_maker/new/35.gif':
         Duration: N/A, bitrate: N/A
           Stream #1:0: Video: gif, bgra, 288x480, 15 fps, 15 tbr, 100 tbn, 100 tbc
       [mp3 @ 0xaea97200] Skipping 0 bytes of junk at 253.
       Input #2, mp3, from '/storage/emulated/0/Download/supnaringtone-49332.mp3':
         Metadata:
           encoder         : Lavf58.20.100
         Duration: 00:00:21.76, start: 0.025057, bitrate: 64 kb/s
           Stream #2:0: Audio: mp3, 44100 Hz, stereo, s16p, 64 kb/s
           Metadata:
             encoder         : Lavc58.35
       Incompatible pixel format 'yuva420p' for codec 'libx264', auto-selecting format 'yuv420p'
       [libx264 @ 0xaeacfc00] width not divisible by 2 (639x812)
       Output #0, mp4, to '/storage/emulated/0/allkotlin/video/movie_1582951554388.mp4':
           Stream #0:0: Video: h264, none, q=2-31, 128 kb/s, 30 fps (default)
           Metadata:
             encoder         : Lavc57.24.102 libx264
           Stream #0:1: Audio: aac, 0 channels, 128 kb/s
           Metadata:
             encoder         : Lavc57.24.102 aac
       Stream mapping:
         Stream #0:0 (png) -> overlay:main (graph 0)
         Stream #1:0 (gif) -> scale (graph 0)
         overlay (graph 0) -> Stream #0:0 (libx264)
         Stream #2:0 -> #0:1 (mp3 (native) -> aac (native))
       Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

    If I use -s 560x560, it works great, but then I can’t keep my aspect ratio.
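
    For what it’s worth, the log above shows two things libx264 objects to: yuva420p is not accepted (it silently auto-selects yuv420p), and the fatal error is the odd 639-pixel width. Here is a hedged sketch of a scale step that rounds the composited frame down to even dimensions while keeping the aspect ratio; the exact filter expression is illustrative, not taken from the command above:

    -filter_complex "[1:v]scale=w=trunc(iw/2)*2:h=trunc(ih/2)*2[ovrl];[0:v][ovrl]overlay=0:0,scale=w=trunc(iw/2)*2:h=trunc(ih/2)*2" \
    -pix_fmt yuv420p -c:v libx264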

    And here is my code:

    public class PhotoEditing extends AppCompatActivity implements GetGifAdapter.GlideInterface, SdCardSongAdapter.MediaInterface
           , FiltersListFragmentListener, EmojiFragmentListener, AddTextFragmentListener {

       public static final String FILE_PROVIDER_AUTHORITY = "com.burhanrashid52.photoeditor.fileprovider";
       @Nullable
       @VisibleForTesting
       Uri mSaveImageUri;
       PhotoEditorView image_preview;
       ImageView image_gif/*,image_preview*/;
       ImageView save, back;
       LinearLayout linearLayout;
       public String sessionId, sessionId1;
       Uri image_selected_uri;

       public Bitmap originalBitmap, filterdBitmap, finalBitmap;
       LinearLayout btn_music_list, btn_music_cut, btn_add_gif, btn_filters_list, btn_emoji, btn_add_text;
       MediaPlayer mediaPlayer;
       String mediaData;
       LinearLayout relativeLayout;
       RelativeLayout seekbar_layout, fm;
       RelativeLayout.LayoutParams layoutparam;
       RangeSeekBar rangeSeekBar;
       Runnable r;
       Handler mHandler;
       private int duration;
       private TextView tvLeft, tvRight;
       RelativeLayout rl_replace, music_fragment;
       FilterListFragment filterListFragment;
       EmojiFragment emojiFragment;
       PhotoEditor photoEditor;
       int screenWidth, screenHeight;
       Bitmap bitmap;
       int brightnessFinal = 0;
       int saturationFinal = 0;
       int constrantFinal = 0;
       int hue = 0;
       String glideData;
       FFmpeg ffmpeg;
       String s;
       String imageHeight;
       String imageWidth ;


       private static final String TAG = "BRIJESH";

       Context context = this;

       static {
           System.loadLibrary("NativeImageProcessor");
       }

       public Bitmap resizeImageToNewSize(Bitmap bitmap, int i, int i2) {
           try {
               int width = bitmap.getWidth();
               int height = bitmap.getHeight();
               float f = (float) i;
               float f2 = (float) i2;
               if (!(height == i2 && width == i)) {
                   float f3 = (float) width;
                   float f4 = f / f3;
                   float f5 = (float) height;
                   float f6 = f2 / f5;
                   if (f4 < f6) {
                       f6 = f4;
                   }
                   f = f3 * f6;
                   f2 = f5 * f6;
               }
               Bitmap bitmap1 = Bitmap.createScaledBitmap(bitmap, (int) f, (int) f2, true);
               fm.removeView(image_preview);
               RelativeLayout.LayoutParams params = new RelativeLayout.LayoutParams(bitmap1.getWidth(), bitmap1.getHeight());

               params.addRule(RelativeLayout.CENTER_HORIZONTAL);
               image_preview.setLayoutParams(params);

               fm.addView(image_preview);
               return bitmap1;
           } catch (Exception unused) {
               fm.removeView(image_preview);
               RelativeLayout.LayoutParams params = new RelativeLayout.LayoutParams(200, 100);

               params.addRule(RelativeLayout.CENTER_IN_PARENT);
               image_preview.setLayoutParams(params);
               fm.addView(image_preview);

               return bitmap;
           }
       }

       @Override
       protected void onCreate(Bundle savedInstanceState) {
           super.onCreate(savedInstanceState);
           setContentView(R.layout.activity_photo_editing);

           initialize(this);
           image_preview = findViewById(R.id.image_preview);
           image_preview.getSource().setScaleType(ImageView.ScaleType.FIT_XY);
           photoEditor = new PhotoEditor.Builder(this, image_preview)
                   .setPinchTextScalable(true)
                   .build();
           image_gif = findViewById(R.id.image_gif);
           linearLayout = findViewById(R.id.linearLayout);
           btn_music_list = findViewById(R.id.btn_music_list);
    //        btn_music_cut = findViewById(R.id.btn_music_cut);
           btn_add_gif = findViewById(R.id.btn_add_gif);
           btn_filters_list = findViewById(R.id.btn_filters_list);
           btn_emoji = findViewById(R.id.btn_emoji);
           btn_add_text = findViewById(R.id.btn_add_text);
           fm = findViewById(R.id.frame);
           rl_replace = findViewById(R.id.replace_fragment);
           music_fragment = findViewById(R.id.music_fragment);
           seekbar_layout = findViewById(R.id.seekbar_layout);
           rangeSeekBar = findViewById(R.id.rangeSeekBar);
           rangeSeekBar.setNotifyWhileDragging(true);
           mHandler = new Handler();
           tvLeft = findViewById(R.id.tvLeft);
           tvRight = findViewById(R.id.tvRight);
           relativeLayout = findViewById(R.id.relativeLayout);
           save = findViewById(R.id.btndone);
           back = findViewById(R.id.btnhome);
           mediaPlayer = new MediaPlayer();
           back.setOnClickListener(new View.OnClickListener() {
               @Override
               public void onClick(View v) {
                   if (mediaPlayer.isPlaying()) {
                       mediaPlayer.stop();
                       onBackPressed();
                   } else onBackPressed();
               }
           });

           sessionId1 = getIntent().getStringExtra("gallary");
           sessionId = getPathFromUri(PhotoEditing.this, Uri.parse(sessionId1));
           image_selected_uri = Uri.parse(sessionId);
           loadImage();

           btn_music_list.setOnClickListener(new View.OnClickListener() {
               @Override
               public void onClick(View v) {
    //                if (seekbar_layout.getVisibility() == View.GONE){
                   if (mediaPlayer != null && mediaPlayer.isPlaying()) {
                       mediaPlayer.stop();
                   }
                   MusicListFragment musicListFragment = new MusicListFragment();
                   FragmentManager fragmentManager = getSupportFragmentManager();
                   FragmentTransaction transaction = fragmentManager.beginTransaction();
                   transaction.replace(R.id.replace_fragment, musicListFragment);
                   transaction.addToBackStack(null);
                   transaction.commit();
                   seekbar_layout.setVisibility(View.VISIBLE);

    //                }
    //                else if (seekbar_layout.getVisibility() == View.VISIBLE) {
    //                    seekbar_layout.setVisibility(View.GONE);
    //                }
               }
           });

    //        btn_music_cut.setOnClickListener(new View.OnClickListener() {
    //            @Override
    //            public void onClick(View v) {
    //                if (seekbar_layout.getVisibility() == View.GONE) {
    //                    if (mediaPlayer != null && mediaPlayer.isPlaying()) {
    //                        seekbar_layout.setVisibility(View.VISIBLE);
    //                    }
    //                } else if (seekbar_layout.getVisibility() == View.VISIBLE) {
    //                    seekbar_layout.setVisibility(View.GONE);
    //                }
    //            }
    //        });

           btn_add_gif.setOnClickListener(new View.OnClickListener() {
               @Override
               public void onClick(View v) {
                   GIfFragment gIfFragment = new GIfFragment();
                   FragmentManager fragmentManager = getSupportFragmentManager();
                   FragmentTransaction transaction = fragmentManager.beginTransaction();
                   transaction.replace(R.id.replace_fragment, gIfFragment);
                   transaction.addToBackStack(null);
                   transaction.commit();
               }
           });



           save.setOnClickListener(new View.OnClickListener() {
               @Override
               public void onClick(View v) {
    //                image_preview.getSource().setDrawingCacheEnabled(true);
    //                Bitmap b = image_preview.getSource().getDrawingCache();
    //                MediaStore.Images.Media.insertImage(context.getContentResolver(), b,"", "");
                   saveImageToGallery();

               }
           });
       }

       private void loadImage() {
           bitmap = BitmapFactory.decodeFile(image_selected_uri.toString());
           originalBitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
           originalBitmap = modifyOrientation(originalBitmap, sessionId);
           finalBitmap = originalBitmap.copy(Bitmap.Config.ARGB_8888, true);
           filterdBitmap = originalBitmap.copy(Bitmap.Config.ARGB_8888, true);

           DisplayMetrics displayMetrics = new DisplayMetrics();
           getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
           int f3672y = displayMetrics.widthPixels;
           int f3673z = displayMetrics.heightPixels;
           float f = getResources().getDisplayMetrics().density;
           int i = f3672y - ((int) (40.0f * f));
           int i2 = f3673z - ((int) (f * 100.0f));
           image_preview.getSource().setImageBitmap(resizeImageToNewSize(bitmap, i, i2));
       }

       private void getDropboxIMGSize(Uri uri){

           BitmapFactory.Options options = new BitmapFactory.Options();
           options.inJustDecodeBounds = true;
           BitmapFactory.decodeFile(new File(uri.getPath()).getAbsolutePath(), options);
           imageHeight= String.valueOf(options.outHeight);
           imageWidth = String.valueOf(options.outWidth);

       }

       @SuppressLint("MissingPermission")
       private void saveImageToGallery() {
           File file = new File(Environment.getExternalStorageDirectory()
                   + File.separator + ""
                   + System.currentTimeMillis() + ".jpg");
           try {
               file.createNewFile();

               SaveSettings saveSettings = new SaveSettings.Builder()
                       .setClearViewsEnabled(true)
                       .setTransparencyEnabled(true)
                       .build();

               photoEditor.saveAsFile(file.getAbsolutePath(), saveSettings, new PhotoEditor.OnSaveListener() {
                   @Override
                   public void onSuccess(@NonNull String imagePath) {
                       Toast.makeText(context, "Image Saved", Toast.LENGTH_SHORT).show();

                       mSaveImageUri = Uri.fromFile(new File(imagePath));
                       getDropboxIMGSize(mSaveImageUri);
                       executeCmd(String.valueOf(mSaveImageUri), mediaData, glideData, rangeSeekBar.getSelectedMinValue().intValue() * 1000, rangeSeekBar.getSelectedMaxValue().intValue() * 1000);

                   }

                   @Override
                   public void onFailure(@NonNull Exception exception) {
                   }
               });
           } catch (IOException e) {
               e.printStackTrace();
           }
       }


       private void executeCmd(String imagepath, String songpath, String gif, int startMs, int endMs) {
           File outputLocation = getConvertedFile(outputPath() + "video", "movie_" + System.currentTimeMillis() + ".mp4");
           Log.e("videofilepath", songpath);
           String[] complexCommand = {"-y", "-i", imagepath,
                   "-ignore_loop", "0",
                   "-i", gif, "-filter_complex", "[1:v]scale=w='bitand(iw,65534)':h='bitand(ih,65534)' [ovrl];[0:v][ovrl]overlay=0:0",
                   "-ss", "" + startMs / 1000, "-t", "" + endMs / 1000, "-i", songpath,
                   "-c:v", "libx264", "-preset", "ultrafast", "-r", "30", "-pix_fmt", "yuva420p", "-c:a", "aac", "-shortest", outputLocation.getPath()};

           try {
               ffmpeg.execute(complexCommand, new ExecuteBinaryResponseHandler() {

                   @Override
                   public void onSuccess(String s) {
                       Log.e("onSuccess", s);
                   }

                   @Override
                   public void onFailure(String s) {
                       Log.e("fail", s);
                   }
               });
           } catch (FFmpegCommandAlreadyRunningException e) {
               Log.e("catch", e.getMessage());
           }
       }
    }

    Any idea what is going on here? Thanks!