
Other articles (59)

  • MediaSPIP: Changing the rights for creating objects and for final publication

    11 November 2010

    By default, MediaSPIP allows 5 types of objects to be created.
    Also by default, the rights to create these objects and to publish them definitively are reserved for administrators, but they can of course be configured by the webmasters.
    These rights are locked down for several reasons: because allowing people to publish should be a deliberate choice of the webmaster rather than of the platform as a whole, and therefore should not be the default; because having an account can also be useful for other things, (...)

  • Improvements to the base version

    13 September 2013

    A nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images to compare.
    To use it, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to improve, for example select[multiple] for multiple-selection lists (...)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

On other sites (5783)

  • Using ffmpeg to change video framerate without video quality loss and keep audio?

    31 May 2020, by Johnny

    I recorded some game videos with my Samsung Galaxy S7. When I add these videos to Blender's video editing, the video and audio end up with different frame counts.

    I think this is an fps problem, because every recorded video has a slightly different frame rate: 59.01, 59.80, etc.

    



    Now I want to change the frame rate of all videos to 60 fps, without losing video quality and while keeping the audio, so that I don't run into these problems in video editing.

    Does somebody have any tips?
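
    For illustration only (this is not part of the original question): a minimal sketch of how this is often done with ffmpeg, assuming hypothetical file names input.mp4 and output.mp4. The fps filter resamples the video to a constant 60 fps by duplicating or dropping frames, -crf 18 re-encodes at a visually near-lossless quality (a truly lossless re-encode would need -crf 0 instead, at the cost of much larger files), and -c:a copy leaves the audio untouched:

    # hypothetical file names; re-encode video at a constant 60 fps, copy the audio as-is
    ffmpeg -i input.mp4 -filter:v fps=60 -c:v libx264 -preset slow -crf 18 -c:a copy output.mp4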

    


  • Unplayable video after running FFmpeg command

    25 May 2020, by HB.

    I asked this question last year. I resolved the issue I had and I implemented the same logic for merging an image with a video, instead of two images. This is running on Android.

    



    Here is the command I'm currently using:

    



    "-i", mFilePath, "-i", drawingPath, "-filter_complex", "[0:v]scale=iw*sar:ih,setsar=1,pad='max(iw\\,2*trunc(ih*47/80/2))':'max(ih\\,2*trunc(ow*80/47/2))':(ow-iw)/2:(oh-ih)/2[v0];[1:v][v0]scale2ref[v1][v0];[v0][v1]overlay=x=(W-w)/2:y=(H-h)/2[v]", "-map", "[v]", "-map", "0:a", "-c:v", "libx264", "-preset", "ultrafast", "-r", outputFPS, outputPath}


    



    The 47/80/2 in the filter is calculated from the device's screen dimensions, 1128 x 1920.
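
    (For reference, this arithmetic is not spelled out in the original post: 1128 and 1920 share a factor of 24, and 1128 / 24 = 47 while 1920 / 24 = 80, so 47/80 is simply the reduced form of the 1128:1920 screen ratio; the extra /2 combined with 2*trunc(...) rounds the padded dimensions down to even numbers, which libx264 requires for yuv420p output.)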

    



    When running this on certain devices, it results in an unplayable video.

    



    But running the following command works perfectly fine:

    



    "-i", mFilePath, "-crf", "18", "-c:v", "libx264", "-preset", "ultrafast", outputPath};


    



    I think the issue is with the filter being applied?


    I compared running the first command on two different devices.

    



      

    • On the first device (Samsung J7 Pro), I was able to run the command successfully and play the video afterward. I tested the output on both devices and it is working.

    • On the second device (Sony Xperia Tablet Z), I was able to run the command successfully but could not play the video. I tested the output on both devices and it doesn't play on either. It does play on my computer.
    



    I compared the original video with the one that isn't working and with the one without a filter, and the only difference I could find is that the non-working video's profile is Baseline@L4.2, while the one without a filter is Baseline@L4.0. The original video's profile is High@L4.0.
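
    (Side note, not from the original post: the jump from level 4.0 to 4.2 is most likely forced by the output frame size. The filtered video is padded to 1920x1128, which is ceil(1920/16) * ceil(1128/16) = 120 * 71 = 8520 macroblocks per frame; H.264 levels 4.0 and 4.1 allow at most 8192 macroblocks per frame, so x264 has to pick level 4.2, whereas the unfiltered 1920x1080 output (120 * 68 = 8160 macroblocks) still fits level 4.0. If the Sony's hardware decoder tops out below level 4.2, that alone could explain the unplayable file; one test would be to pad or scale to a 1080p-compatible size instead.)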

    



    Here are all the videos: the original, the one without a filter (working) and the one with the filter (not working).

    



    I have no idea why this is happening. Any help would be appreciated.


    Edit 1:

    



    Here is the actual command as requested:

    



    "-i", "/storage/emulated/0/Android/data/com.my.package/files/CameraTemp/2020_05_24_09_17_53.mp4", "-i", "/storage/emulated/0/Android/data/com.my.package/files/MyVideos/tempShapes.png", "-filter_complex", "[0:v]scale=iw*sar:ih,setsar=1,pad='max(iw\\,2*trunc(ih*47/80/2))':'max(ih\\,2*trunc(ow*80/47/2))':(ow-iw)/2:(oh-ih)/2[v0];[1:v][v0]scale2ref[v1][v0];[v0][v1]overlay=x=(W-w)/2:y=(H-h)/2[v]", "-map", "[v]", "-map", "0:a", "-c:v", "libx264", "-preset", "ultrafast", "-r", "30", "/storage/emulated/0/Android/data/com.my.package/files/MyVideos/video with line.mp4"


    



    and here is the complete log:

    



    ffmpeg version n4.0-39-gda39990 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 4.9.x (GCC) 20150123 (prerelease)
  configuration: --target-os=linux --cross-prefix=/root/bravobit/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/root/bravobit/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-ffprobe --enable-libopus --enable-libvorbis --enable-libfdk-aac --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-libvpx --enable-libass --enable-yasm --enable-pthreads --disable-debug --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-linux-perf --disable-doc --disable-shared --enable-static --enable-runtime-cpudetect --enable-nonfree --enable-network --enable-avresample --enable-avformat --enable-avcodec --enable-indev=lavfi --enable-hwaccels --enable-ffmpeg --enable-zlib --enable-gpl --enable-small --enable-nonfree --pkg-config=pkg-config --pkg-config-flags=--static --prefix=/root/bravobit/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/root/bravobit/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/root/bravobit/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-cxxflags=
  libavutil      56. 14.100 / 56. 14.100
  libavcodec     58. 18.100 / 58. 18.100
  libavformat    58. 12.100 / 58. 12.100
  libavdevice    58.  3.100 / 58.  3.100
  libavfilter     7. 16.100 /  7. 16.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  1.100 /  5.  1.100
  libswresample   3.  1.100 /  3.  1.100
  libpostproc    55.  1.100 / 55.  1.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/Android/data/com.my.package/files/CameraTemp/2020_05_24_09_17_53.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    creation_time   : 2020-05-24T08:18:02.000000Z
  Duration: 00:00:01.64, start: 0.000000, bitrate: 20750 kb/s
    Stream #0:0(eng): Video: h264 (avc1 / 0x31637661), yuv420p, 1920x1080, 18056 kb/s, SAR 1:1 DAR 16:9, 29.70 fps, 29.67 tbr, 90k tbn, 180k tbc (default)
    Metadata:
      creation_time   : 2020-05-24T08:18:02.000000Z
      handler_name    : VideoHandle
    Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 155 kb/s (default)
    Metadata:
      creation_time   : 2020-05-24T08:18:02.000000Z
      handler_name    : SoundHandle
Input #1, png_pipe, from '/storage/emulated/0/Android/data/com.my.package/files/MyVideos/tempShapes.png':
  Duration: N/A, bitrate: N/A
    Stream #1:0: Video: png, rgba(pc), 1920x1128, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
  Stream #0:0 (h264) -> scale (graph 0)
  Stream #1:0 (png) -> scale2ref:default (graph 0)
  overlay (graph 0) -> Stream #0:0 (libx264)
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
frame=    0 fps=0.0 q=0.0 size=       0kB time=-577014:32:22.77 bitrate=  -0.0kbits/s speed=N/A    
frame=    0 fps=0.0 q=0.0 size=       0kB time=-577014:32:22.77 bitrate=  -0.0kbits/s speed=N/A    
[libx264 @ 0xb83fc8a0] using SAR=1/1
[libx264 @ 0xb83fc8a0] using cpu capabilities: ARMv6 NEON
[libx264 @ 0xb83fc8a0] profile Constrained Baseline, level 4.2
[libx264 @ 0xb83fc8a0] 264 - core 152 r2851M ba24899 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=2 keyint_min=1 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
Output #0, mp4, to '/storage/emulated/0/Android/data/com.my.package/files/MyVideos/video with line.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    encoder         : Lavf58.12.100
    Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1920x1128 [SAR 1:1 DAR 80:47], q=-1--1, 29 fps, 14848 tbn, 29 tbc (default)
    Metadata:
      encoder         : Lavc58.18.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      creation_time   : 2020-05-24T08:18:02.000000Z
      handler_name    : SoundHandle
      encoder         : Lavc58.18.100 aac
frame=    1 fps=0.4 q=0.0 size=       0kB time=00:00:01.01 bitrate=   0.4kbits/s speed=0.397x    
frame=    5 fps=1.6 q=0.0 size=       0kB time=00:00:01.01 bitrate=   0.4kbits/s speed=0.33x    
frame=    9 fps=2.5 q=24.0 size=     256kB time=00:00:01.01 bitrate=2075.0kbits/s speed=0.28x    
frame=   13 fps=3.1 q=25.0 size=    1024kB time=00:00:01.01 bitrate=8298.9kbits/s speed=0.243x    
frame=   18 fps=3.8 q=29.0 size=    2048kB time=00:00:01.01 bitrate=16597.5kbits/s speed=0.214x    
frame=   21 fps=3.9 q=25.0 size=    2560kB time=00:00:01.01 bitrate=20746.7kbits/s speed=0.19x    
frame=   23 fps=3.9 q=25.0 size=    2816kB time=00:00:01.01 bitrate=22821.4kbits/s speed=0.173x    
frame=   26 fps=4.0 q=29.0 size=    3584kB time=00:00:01.01 bitrate=29045.3kbits/s speed=0.156x    
Past duration 0.617577 too large
Past duration 0.639641 too large
frame=   28 fps=3.9 q=29.0 size=    3840kB time=00:00:01.01 bitrate=31119.9kbits/s speed=0.142x    
Past duration 0.665230 too large
frame=   29 fps=3.8 q=25.0 size=    3840kB time=00:00:01.01 bitrate=31119.9kbits/s speed=0.132x    
Past duration 0.690834 too large
Past duration 0.711281 too large
Past duration 0.736885 too large
frame=   32 fps=3.9 q=29.0 size=    4608kB time=00:00:01.01 bitrate=37343.8kbits/s speed=0.123x    
Past duration 0.762474 too large
Past duration 0.783577 too large
Past duration 0.807564 too large
frame=   35 fps=3.9 q=25.0 size=    4864kB time=00:00:01.01 bitrate=39418.4kbits/s speed=0.112x    
Past duration 0.831551 too large
Past duration 0.855537 too large
frame=   37 fps=3.5 q=25.0 size=    5376kB time=00:00:01.01 bitrate=43567.7kbits/s speed=0.0968x    
Past duration 0.879524 too large
Past duration 0.903511 too large
frame=   39 fps=3.4 q=25.0 size=    5376kB time=00:00:01.06 bitrate=41196.6kbits/s speed=0.0927x    
Past duration 0.927498 too large
Past duration 0.951500 too large
frame=   41 fps=3.4 q=25.0 size=    5376kB time=00:00:01.13 bitrate=38700.0kbits/s speed=0.0931x    
frame=   41 fps=3.2 q=25.0 size=    5376kB time=00:00:01.13 bitrate=38700.0kbits/s speed=0.0886x    
frame=   41 fps=3.1 q=25.0 size=    5888kB time=00:00:01.43 bitrate=33554.2kbits/s speed=0.108x    
Past duration 0.975487 too large
frame=   45 fps=3.2 q=26.0 size=    6656kB time=00:00:01.60 bitrate=33905.4kbits/s speed=0.114x    
frame=   45 fps=3.0 q=-1.0 Lsize=    8158kB time=00:00:01.65 bitrate=40480.7kbits/s speed=0.11x    
video:8127kB audio:28kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.032895%
[libx264 @ 0xb83fc8a0] frame I:23    Avg QP:24.70  size:337646
[libx264 @ 0xb83fc8a0] frame P:22    Avg QP:29.00  size: 25250
[libx264 @ 0xb83fc8a0] mb I  I16..4: 100.0%  0.0%  0.0%
[libx264 @ 0xb83fc8a0] mb P  I16..4:  0.4%  0.0%  0.0%  P16..4: 43.6%  0.0%  0.0%  0.0%  0.0%    skip:56.0%
[libx264 @ 0xb83fc8a0] coded y,uvDC,uvAC intra: 90.0% 84.7% 58.1% inter: 20.1% 6.2% 0.1%
[libx264 @ 0xb83fc8a0] i16 v,h,dc,p: 25% 28% 28% 20%
[libx264 @ 0xb83fc8a0] i8c dc,h,v,p: 39% 25% 20% 16%
[libx264 @ 0xb83fc8a0] kb/s:42901.20
[aac @ 0xb83d7d10] Qavg: 3517.779


    


  • Decode mp3 using FFMpeg, Android NDK - What is wrong with my AVFormatContext?

    27 February 2020, by michpohl

    I am trying to decode an MP3 file to a raw PCM stream using FFmpeg via JNI on Android. I have compiled the latest FFmpeg version (4.2) and added it to my app. This did not cause any problems.
    The goal is to be able to use mp3 files from the device's storage for playback with oboe.

    Since I am relatively inexperienced with both C++ and FFMpeg, my approach is based upon this:
    oboe's RhythmGame example

    I have based my FFMpegExtractor class on the one found in the example here. With the help of StackOverflow, the AAssetManager use was removed and instead a MediaSource helper class now serves as a wrapper for my stream (see here)

    But unfortunately, creating the AVFormatContext doesn’t work right - and I can’t seem to understand why. Since I have very limited understanding of correct pointer usage and C++ memory management, I suspect it’s most likely I’m doing something wrong in that area. But honestly, I have no idea.

    This is my FFMpegExtractor.h:

    #ifndef MYAPP_FFMPEGEXTRACTOR_H
    #define MYAPP_FFMPEGEXTRACTOR_H

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libswresample/swresample.h>
    #include <libavutil/opt.h>
    }

    #include <cstdint>
    #include <android/asset_manager.h>
    #include <string>
    #include <fstream>
    #include "MediaSource.cpp"


    class FFMpegExtractor {
    public:

       FFMpegExtractor();

       ~FFMpegExtractor();

       int64_t decode2(char *filepath, uint8_t *targetData, AudioProperties targetProperties);

    private:
       MediaSource *mSource;

       bool createAVFormatContext(AVIOContext *avioContext, AVFormatContext **avFormatContext);

       bool openAVFormatContext(AVFormatContext *avFormatContext);

       int32_t cleanup(AVIOContext *avioContext, AVFormatContext *avFormatContext);

       bool getStreamInfo(AVFormatContext *avFormatContext);

       AVStream *getBestAudioStream(AVFormatContext *avFormatContext);

       AVCodec *findCodec(AVCodecID id);

       void printCodecParameters(AVCodecParameters *params);

       bool createAVIOContext2(const std::string &filePath, uint8_t *buffer, uint32_t bufferSize,
                               AVIOContext **avioContext);
    };


    #endif //MYAPP_FFMPEGEXTRACTOR_H

    This is FFMpegExtractor.cpp:

    #include <memory>
    #include <oboe/Definitions.h>
    #include "FFMpegExtractor.h"
    #include "logging.h"
    #include <fstream>

    FFMpegExtractor::FFMpegExtractor() {
       mSource = new MediaSource;
    }

    FFMpegExtractor::~FFMpegExtractor() {
       delete mSource;
    }

    constexpr int kInternalBufferSize = 1152; // Use MP3 block size. https://wiki.hydrogenaud.io/index.php?title=MP3

    /**
    * Reads from an IStream into FFmpeg.
    *
    * @param ptr       A pointer to the user-defined IO data structure.
    * @param buf       A buffer to read into.
    * @param buf_size  The size of the buffer buff.
    *
    * @return The number of bytes read into the buffer.
    */


    // If FFmpeg needs to read the file, it will call this function.
    // We need to fill the buffer with file's data.
    int read(void *opaque, uint8_t *buffer, int buf_size) {
       MediaSource *source = (MediaSource *) opaque;
       return source->read(buffer, buf_size);
    }

    // If FFmpeg needs to seek in the file, it will call this function.
    // We need to change the read pos.
    int64_t seek(void *opaque, int64_t offset, int whence) {
       MediaSource *source = (MediaSource *) opaque;
       return source->seek(offset, whence);
    }


    // Create and save a MediaSource instance.
    bool FFMpegExtractor::createAVIOContext2(const std::string &filepath, uint8_t *buffer, uint32_t bufferSize,
                                            AVIOContext **avioContext) {

       mSource = new MediaSource;
       mSource->open(filepath);
       constexpr int isBufferWriteable = 0;

       *avioContext = avio_alloc_context(
               buffer, // internal buffer for FFmpeg to use
               bufferSize, // For optimal decoding speed this should be the protocol block size
               isBufferWriteable,
               mSource, // Will be passed to our callback functions as a (void *)
               read, // Read callback function
               nullptr, // Write callback function (not used)
               seek); // Seek callback function

       if (*avioContext == nullptr) {
           LOGE("Failed to create AVIO context");
           return false;
       } else {
           return true;
       }
    }

    bool
    FFMpegExtractor::createAVFormatContext(AVIOContext *avioContext,
                                          AVFormatContext **avFormatContext) {

       *avFormatContext = avformat_alloc_context();
       (*avFormatContext)->pb = avioContext;

       if (*avFormatContext == nullptr) {
           LOGE("Failed to create AVFormatContext");
           return false;
       } else {
           LOGD("Successfully created AVFormatContext");
           return true;
       }
    }

    bool FFMpegExtractor::openAVFormatContext(AVFormatContext *avFormatContext) {

       int result = avformat_open_input(&avFormatContext,
                                        "", /* URL is left empty because we're providing our own I/O */
                                        nullptr /* AVInputFormat *fmt */,
                                        nullptr /* AVDictionary **options */
       );

       if (result == 0) {
           return true;
       } else {
           LOGE("Failed to open file. Error code %s", av_err2str(result));
           return false;
       }
    }

    bool FFMpegExtractor::getStreamInfo(AVFormatContext *avFormatContext) {

       int result = avformat_find_stream_info(avFormatContext, nullptr);
       if (result == 0) {
           return true;
       } else {
           LOGE("Failed to find stream info. Error code %s", av_err2str(result));
           return false;
       }
    }

    AVStream *FFMpegExtractor::getBestAudioStream(AVFormatContext *avFormatContext) {

       int streamIndex = av_find_best_stream(avFormatContext, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);

       if (streamIndex < 0) {
           LOGE("Could not find stream");
           return nullptr;
       } else {
           return avFormatContext->streams[streamIndex];
       }
    }

    int64_t FFMpegExtractor::decode2(
           char* filepath,
           uint8_t *targetData,
           AudioProperties targetProperties) {

       LOGD("Decode SETUP");
       int returnValue = -1; // -1 indicates error

       // Create a buffer for FFmpeg to use for decoding (freed in the custom deleter below)
       auto buffer = reinterpret_cast<uint8_t *>(av_malloc(kInternalBufferSize));


       // Create an AVIOContext with a custom deleter
       std::unique_ptr<AVIOContext, void (*)(AVIOContext *)> ioContext{
               nullptr,
               [](AVIOContext *c) {
                   av_free(c->buffer);
                   avio_context_free(&c);
               }
       };
       {
           AVIOContext *tmp = nullptr;
           if (!createAVIOContext2(filepath, buffer, kInternalBufferSize, &tmp)) {
               LOGE("Could not create an AVIOContext");
               return returnValue;
           }
           ioContext.reset(tmp);
       }
       // Create an AVFormatContext using the avformat_free_context as the deleter function
       std::unique_ptr<AVFormatContext, decltype(&avformat_free_context)> formatContext{
               nullptr,
               &avformat_free_context
       };

       {
           AVFormatContext *tmp;
           if (!createAVFormatContext(ioContext.get(), &tmp)) return returnValue;
           formatContext.reset(tmp);
       }
       if (!openAVFormatContext(formatContext.get())) return returnValue;
       LOGD("172");

       if (!getStreamInfo(formatContext.get())) return returnValue;
       LOGD("175");

       // Obtain the best audio stream to decode
       AVStream *stream = getBestAudioStream(formatContext.get());
       if (stream == nullptr || stream->codecpar == nullptr) {
           LOGE("Could not find a suitable audio stream to decode");
           return returnValue;
       }
       LOGD("183");

       printCodecParameters(stream->codecpar);

       // Find the codec to decode this stream
       AVCodec *codec = avcodec_find_decoder(stream->codecpar->codec_id);
       if (!codec) {
           LOGE("Could not find codec with ID: %d", stream->codecpar->codec_id);
           return returnValue;
       }

       // Create the codec context, specifying the deleter function
       std::unique_ptr<AVCodecContext, void (*)(AVCodecContext *)> codecContext{
               nullptr,
               [](AVCodecContext *c) { avcodec_free_context(&c); }
       };
       {
           AVCodecContext *tmp = avcodec_alloc_context3(codec);
           if (!tmp) {
               LOGE("Failed to allocate codec context");
               return returnValue;
           }
           codecContext.reset(tmp);
       }

       // Copy the codec parameters into the context
       if (avcodec_parameters_to_context(codecContext.get(), stream->codecpar) < 0) {
           LOGE("Failed to copy codec parameters to codec context");
           return returnValue;
       }

       // Open the codec
       if (avcodec_open2(codecContext.get(), codec, nullptr) < 0) {
           LOGE("Could not open codec");
           return returnValue;
       }

       // prepare resampler
       int32_t outChannelLayout = (1 << targetProperties.channelCount) - 1;
       LOGD("Channel layout %d", outChannelLayout);

       SwrContext *swr = swr_alloc();
       av_opt_set_int(swr, "in_channel_count", stream->codecpar->channels, 0);
       av_opt_set_int(swr, "out_channel_count", targetProperties.channelCount, 0);
       av_opt_set_int(swr, "in_channel_layout", stream->codecpar->channel_layout, 0);
       av_opt_set_int(swr, "out_channel_layout", outChannelLayout, 0);
       av_opt_set_int(swr, "in_sample_rate", stream->codecpar->sample_rate, 0);
       av_opt_set_int(swr, "out_sample_rate", targetProperties.sampleRate, 0);
       av_opt_set_int(swr, "in_sample_fmt", stream->codecpar->format, 0);
       av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_FLT, 0);
       av_opt_set_int(swr, "force_resampling", 1, 0);

       // Check that resampler has been inited
       int result = swr_init(swr);
       if (result != 0) {
           LOGE("swr_init failed. Error: %s", av_err2str(result));
           return returnValue;
       };
       if (!swr_is_initialized(swr)) {
           LOGE("swr_is_initialized is false\n");
           return returnValue;
       }

       // Prepare to read data
       int bytesWritten = 0;
       AVPacket avPacket; // Stores compressed audio data
       av_init_packet(&avPacket);
       AVFrame *decodedFrame = av_frame_alloc(); // Stores raw audio data
       int bytesPerSample = av_get_bytes_per_sample((AVSampleFormat) stream->codecpar->format);

       LOGD("Bytes per sample %d", bytesPerSample);

       // While there is more data to read, read it into the avPacket
       while (av_read_frame(formatContext.get(), &avPacket) == 0) {

           if (avPacket.stream_index == stream->index) {

               while (avPacket.size > 0) {
                   // Pass our compressed data into the codec
                   result = avcodec_send_packet(codecContext.get(), &avPacket);
                   if (result != 0) {
                       LOGE("avcodec_send_packet error: %s", av_err2str(result));
                       goto cleanup;
                   }

                   // Retrieve our raw data from the codec
                   result = avcodec_receive_frame(codecContext.get(), decodedFrame);
                   if (result != 0) {
                       LOGE("avcodec_receive_frame error: %s", av_err2str(result));
                       goto cleanup;
                   }

                   // DO RESAMPLING
                   auto dst_nb_samples = (int32_t) av_rescale_rnd(
                           swr_get_delay(swr, decodedFrame->sample_rate) + decodedFrame->nb_samples,
                           targetProperties.sampleRate,
                           decodedFrame->sample_rate,
                           AV_ROUND_UP);

                   short *buffer1;
                   av_samples_alloc(
                           (uint8_t **) &buffer1,
                           nullptr,
                           targetProperties.channelCount,
                           dst_nb_samples,
                           AV_SAMPLE_FMT_FLT,
                           0);
                   int frame_count = swr_convert(
                           swr,
                           (uint8_t **) &buffer1,
                           dst_nb_samples,
                           (const uint8_t **) decodedFrame->data,
                           decodedFrame->nb_samples);

                   int64_t bytesToWrite = frame_count * sizeof(float) * targetProperties.channelCount;
                   memcpy(targetData + bytesWritten, buffer1, (size_t) bytesToWrite);
                   bytesWritten += bytesToWrite;
                   av_freep(&buffer1);

                   avPacket.size = 0;
                   avPacket.data = nullptr;
               }
           }
       }

       av_frame_free(&decodedFrame);

       returnValue = bytesWritten;

       cleanup:
       return returnValue;
    }

    void FFMpegExtractor::printCodecParameters(AVCodecParameters *params) {

       LOGD("Stream properties");
       LOGD("Channels: %d", params->channels);
       LOGD("Channel layout: %"
                    PRId64, params->channel_layout);
       LOGD("Sample rate: %d", params->sample_rate);
       LOGD("Format: %s", av_get_sample_fmt_name((AVSampleFormat) params->format));
       LOGD("Frame size: %d", params->frame_size);
    }

    And this is MediaSource.cpp:

    #ifndef MYAPP_MEDIASOURCE_CPP
    #define MYAPP_MEDIASOURCE_CPP

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libswresample/swresample.h>
    #include <libavutil/opt.h>
    }

    #include <cstdint>
    #include <android/asset_manager.h>
    #include <string>
    #include <fstream>
    #include "logging.h"

    // wrapper class for file stream
    class MediaSource {
    public:

       MediaSource() {
       }

       ~MediaSource() {
           source.close();
       }

       void open(const std::string &filePath) {
           const char *x = filePath.c_str();
           LOGD("Opened %s", x);
           source.open(filePath, std::ios::in | std::ios::binary);
       }

       int read(uint8_t *buffer, int buf_size) {
           // read data to buffer
           source.read((char *) buffer, buf_size);
           // return how many bytes were read
           return source.gcount();
       }

       int64_t seek(int64_t offset, int whence) {
           if (whence == AVSEEK_SIZE) {
               // FFmpeg needs file size.
               int oldPos = source.tellg();
               source.seekg(0, std::ios::end);
               int64_t length = source.tellg();
               // seek to old pos
               source.seekg(oldPos);
               return length;
           } else if (whence == SEEK_SET) {
               // set pos to offset
               source.seekg(offset);
           } else if (whence == SEEK_CUR) {
               // add offset to pos
               source.seekg(offset, std::ios::cur);
           } else {
               // do not support other flags, return -1
               return -1;
           }
           // return current pos
           return source.tellg();
       }

    private:
       std::ifstream source;
    };

    #endif //MYAPP_MEDIASOURCE_CPP

    When the code is executed, I can see that I pass in the correct file path, so I assume the mp3 resource is there.
    When this code is executed, the app crashes at line 103 of FFMpegExtractor.cpp, at formatContext.reset(tmp);

    This is what Android Studio logs when the app crashes:

    --------- beginning of crash
    2020-02-27 14:31:26.341 9852-9945/com.user.myapp A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x7fffffff0 in tid 9945 (chaelpohl.loopy), pid 9852 (user.myapp)

    This is the (sadly very short) output I get with ndk-stack:

    ********** Crash dump: **********
    Build fingerprint: 'samsung/dreamltexx/dreamlte:9/PPR1.180610.011/G950FXXU6DSK9:user/release-keys'
    #00 0x0000000000016c50 /data/app/com.user.myapp-D7dBCgHF-vdQNNSald4lWA==/lib/arm64/libavformat.so (avformat_free_context+260)
                                                                                                            avformat_free_context
                                                                                                            ??:0:0
    Crash dump is completed

    I poked around a bit, and every call to my formatContext crashes the app, so I assume there is something wrong with the input I provide to build it, but I have no clue how to debug this.
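
    (The following is not part of the original question, and not necessarily the cause of this particular crash, but one thing that stands out in createAVFormatContext above is the ordering: (*avFormatContext)->pb is assigned before the nullptr check, so an allocation failure would already dereference a null pointer. A minimal reordering sketch, reusing the signature and the LOGD/LOGE macros from the post:)

    bool FFMpegExtractor::createAVFormatContext(AVIOContext *avioContext,
                                                AVFormatContext **avFormatContext) {
        // Allocate first, check the result, and only then touch the struct's fields.
        *avFormatContext = avformat_alloc_context();
        if (*avFormatContext == nullptr) {
            LOGE("Failed to create AVFormatContext");
            return false;
        }
        (*avFormatContext)->pb = avioContext; // hand FFmpeg the custom AVIOContext
        LOGD("Successfully created AVFormatContext");
        return true;
    }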

    Any help is appreciated! (Happy to provide additional resources if something crucial is missing.)