
Other articles (48)

  • What is an editorial?

    21 June 2013

    Write your point of view in an article. It will be filed in a section set aside for this purpose.
    An editorial is a text-only article. Its purpose is to collect points of view in a dedicated section. A single editorial is featured on the home page; to read earlier ones, browse the dedicated section.
    You can customize the form used to create an editorial.
    Editorial creation form For a document of the editorial type, the (...)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP:
    Debian Squeeze: 6.x.x
    Debian Wheezy: 7.x.x
    Debian Jessie: 8.x.x
    Ubuntu The Precise Pangolin: 12.04 LTS
    Ubuntu The Trusty Tahr: 14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Support for all media types

    10 April 2011

    Unlike many modern programs and document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether: images (png, gif, jpg, bmp and others); audio (MP3, Ogg, Wav and others); video (Avi, MP4, Ogv, mpg, mov, wmv and others); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (7540)

  • GStreamer AAC encoding no longer supported?

    22 July 2016, by Gianks

    I'd like to include AAC as one of the compatible formats in my app, but I'm having trouble with its encoding.
    FAAC seems to be missing from the GStreamer-1.0 Debian-derived packages (see Ubuntu), and the main reason for that (if I understand correctly) is the presence of avenc_aac as a replacement (Launchpad bug report).

    I've tried the following:

    gst-launch-1.0 filesrc location="src.avi" ! tee name=t  t.! queue ! decodebin ! progressreport ! x264enc ! mux. t.! queue ! decodebin ! audioconvert ! audioresample ! avenc_aac compliance=-2 ! mux. avmux_mpegts name=mux ! filesink location=/tmp/test.avi

    It hangs while prerolling with:

    ERROR libav :0:: AAC bitstream not in ADTS format and extradata missing

    Using mpegtsmux instead of avmux_mpegts seems to work, since the file is created, but the audio does not play (with some players the file is completely unplayable).

    This is the trace from mplayer:

    Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
    [aac @ 0x7f2860d6c3c0]channel element 3.15 is not allocated
    [aac @ 0x7f2860d6c3c0]Sample rate index in program config element does not match the sample rate index configured by the container.
    [aac @ 0x7f2860d6c3c0]Inconsistent channel configuration.
    [aac @ 0x7f2860d6c3c0]get_buffer() failed
    [aac @ 0x7f2860d6c3c0]Assuming an incorrectly encoded 7.1 channel layout instead of a spec-compliant 7.1(wide) layout, use -strict 1 to decode according to the specification instead.
    [aac @ 0x7f2860d6c3c0]Reserved bit set.
    [aac @ 0x7f2860d6c3c0]Number of bands (20) exceeds limit (14).
    [aac @ 0x7f2860d6c3c0]invalid band type
    [aac @ 0x7f2860d6c3c0]More than one AAC RDB per ADTS frame is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
    [aac @ 0x7f2860d6c3c0]Reserved bit set.
    [aac @ 0x7f2860d6c3c0]Number of bands (45) exceeds limit (28).
    Unknown/missing audio format -> no sound
    ADecoder init failed :(
    Opening audio decoder: [faad] AAC (MPEG2/4 Advanced Audio Coding)
    FAAD: compressed input bitrate missing, assuming 128kbit/s!
    AUDIO: 44100 Hz, 2 ch, floatle, 128.0 kbit/9.07% (ratio: 16000->176400)
    Selected audio codec: [faad] afm: faad (FAAD AAC (MPEG-2/MPEG-4 Audio))
    ==========================================================================
    AO: [pulse] 44100Hz 2ch floatle (4 bytes per sample)
    Starting playback...
    FAAD: error: Bitstream value not allowed by specification, trying to resync!
    FAAD: error: Invalid number of channels, trying to resync!
    FAAD: error: Invalid number of channels, trying to resync!
    FAAD: error: Bitstream value not allowed by specification, trying to resync!
    FAAD: error: Invalid number of channels, trying to resync!
    FAAD: error: Bitstream value not allowed by specification, trying to resync!
    FAAD: error: Channel coupling not yet implemented, trying to resync!
    FAAD: error: Invalid number of channels, trying to resync!
    FAAD: error: Invalid number of channels, trying to resync!
    FAAD: error: Bitstream value not allowed by specification, trying to resync!
    FAAD: Failed to decode frame: Bitstream value not allowed by specification
    Movie-Aspect is 1.33:1 - prescaling to correct movie aspect.
    VO: [vdpau] 640x480 => 640x480 Planar YV12
    A:3602.2 V:3600.0 A-V:  2.143 ct:  0.000   3/  3 ??% ??% ??,?% 0 0
    FAAD: error: Array index out of range, trying to resync!
    FAAD: error: Bitstream value not allowed by specification, trying to resync!
    FAAD: error: Bitstream value not allowed by specification, trying to resync!
    FAAD: error: Unexpected fill element with SBR data, trying to resync!
    FAAD: error: Bitstream value not allowed by specification, trying to resync!
    FAAD: error: Bitstream value not allowed by specification, trying to resync!
    FAAD: error: Channel coupling not yet implemented, trying to resync!
    FAAD: error: Invalid number of channels, trying to resync!
    FAAD: error: PCE shall be the first element in a frame, trying to resync!
    FAAD: error: Invalid number of channels, trying to resync!
    FAAD: Failed to decode frame: Invalid number of channels
    A:3602.2 V:3600.1 A-V:  2.063 ct:  0.000   4/  4 ??% ??% ??,?% 0 0

    These are the messages produced by VLC (10 seconds of playback):

    ts info: MPEG-4 descriptor not found for pid 0x42 type 0xf
    core error: option sub-original-fps does not exist
    subtitle warning: failed to recognize subtitle type
    core error: no suitable demux module for `file/subtitle:///tmp//test.avi.idx'
    avcodec info: Using NVIDIA VDPAU Driver Shared Library 361.42 Tue Mar 22 17:29:16 PDT 2016 for hardware decoding.
    core warning: VoutDisplayEvent 'pictures invalid'
    core warning: VoutDisplayEvent 'pictures invalid'
    packetizer_mpeg4audio warning: Invalid ADTS header
    packetizer_mpeg4audio warning: ADTS CRC not supported
    packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
    (the last three messages repeat for the remainder of the playback)

    Using the error from the hanging pipeline, I finally discovered that avenc_aac needs to be told to output ADTS AAC rather than raw AAC; the problem is that I have no idea how to do that with GStreamer. See the bottom of the page linked here: FFmpeg ticket

    At this point, since I've found no documentation, it would seem fair to say that GStreamer has no support for AAC encoding... which can't be true, I guess! (In any case, the absence of FAAC seems strange to me if avenc_aac always has to be put in experimental mode.)

    Can someone propose a working pipeline for this?

    UPDATE

    After some more research I found (via gst-inspect on avenc_aac) what I'm probably looking for, but I don't know how to set it up as needed.
    Have a look at stream-format:

    Pad Templates:
     SRC template: 'src'
       Availability: Always
       Capabilities:
         audio/mpeg
                  channels: [ 1, 6 ]
                      rate: [ 4000, 96000 ]
               mpegversion: 4
             stream-format: raw
          base-profile: lc

    Thanks
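
    The question above boils down to converting avenc_aac's raw output to ADTS before mpegtsmux. Two untested command-line sketches, assuming GStreamer 1.x with gst-plugins-good and gst-libav installed: recent aacparse versions can repackage raw AAC as ADTS when downstream caps ask for it, and mp4mux accepts stream-format=raw directly, so muxing to MP4 sidesteps the requirement entirely.

```shell
# Sketch 1 (assumes this aacparse can convert raw -> ADTS):
gst-launch-1.0 filesrc location="src.avi" ! decodebin name=d \
  d. ! queue ! x264enc ! mux. \
  d. ! queue ! audioconvert ! audioresample ! avenc_aac compliance=-2 \
     ! aacparse ! "audio/mpeg,stream-format=adts" ! mux. \
  mpegtsmux name=mux ! filesink location=/tmp/test.ts

# Sketch 2: avoid the ADTS requirement by muxing to MP4 instead of MPEG-TS:
gst-launch-1.0 filesrc location="src.avi" ! decodebin name=d \
  d. ! queue ! x264enc ! mux. \
  d. ! queue ! audioconvert ! audioresample ! avenc_aac compliance=-2 ! mux. \
  mp4mux name=mux ! filesink location=/tmp/test.mp4
```

    Both use a single decodebin instead of the tee-plus-two-decodebins arrangement, since decodebin exposes one pad per elementary stream.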

  • Live video streaming not playing in the dash.js player using ffmpeg and nginx-rtmp on a Raspberry Pi

    15 August 2016, by sparks

    I am trying to stream live video from a Raspberry Pi and its camera module to a web browser, but the video doesn't play. I am using nginx-rtmp and ffmpeg to process the video. On the client side I use the dash.js player to try to play the video, with no luck. I see this warning in the terminal:

    [flv @ 0x2c3b950] Failed to update header with correct duration.
    [flv @ 0x2c3b950] Failed to update header with correct filesize.

    But I am not sure whether this could be the issue. Does anybody know what might be wrong? I don't expect a straight answer, but I am sure somebody can point me in the right direction. Thank you in advance. Here is my setup.

    Nginx.conf

    events {
     worker_connections  1024;
    }


    http     {
           include       mime.types;
           default_type  application/octet-stream;


    sendfile        on;

    keepalive_timeout  65;

    server {
       listen       80;
       server_name 192.168.1.114 localhost;
       location / {
           root   /var/www;
           index  index.html index.htm;
       }


       location /dash {
           root /var/www;
           add_header Cache-Control no-cache;
       }

        location /dash.js{
           root /var/www;
         }

        location /hls {
          types {
           application/vnd.apple.mpegurl m3u8;
           video/mp2t ts;
         }

         root /var/www;
         index index.html;
         add_header Cache-Control no-cache;
       }

       location /rtmpcontrol{
           rtmp_control all;
        }

       location /rtmpstat{
           rtmp_stat all;
        }

       #error_page  404              /404.html;


              error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   html;
       }

          }
    rtmp {
      server {
       listen 1935;

       chunk_size 4000;

       application rtmp {
           live on;
           hls on;
           dash on;
           dash_path /var/www/dash;
           hls_path /var/www/hls;
       }
     }
    }
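
    One knob worth checking in the rtmp block above: nginx-rtmp cuts DASH fragments on keyframes, so the fragment length should match the encoder's GOP. A sketch with assumed values (the directives are from the nginx-rtmp-module documentation):

```nginx
application rtmp {
    live on;
    hls on;
    dash on;
    dash_path /var/www/dash;
    hls_path /var/www/hls;
    dash_fragment 3s;           # assumed value; pair with a ~3 s keyframe interval
    dash_playlist_length 30s;   # assumed value; how much history the MPD keeps
}
```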

    baseline.html

    <script src='http://stackoverflow.com/feeds/tag/dash.all.js'></script>
    <script>
        function getUrlVars() {
            var vars = {};
            var parts = window.location.href.replace(/[?&]+([^=&]+)=([^&]*)/gi, function(m, key, value) {
                vars[key] = value;
            });
            return vars;
        }

        function startVideo() {
            var vars = getUrlVars(),
                url = "http://192.168.1.114:80/dash/stream.mpd",
                video,
                context,
                player;

            if (vars && vars.hasOwnProperty("url")) {
                url = vars.url;
            }

            video = document.querySelector(".dash-video-player video");
            context = new Dash.di.DashContext();
            player = new MediaPlayer(context);

            player.startup();

            player.attachView(video);
            player.setAutoPlay(false);

            player.attachSource(url);
        }
    </script>

    <body onload="startVideo()">

    How I grab and push the video stream

    raspivid -w 640 -h 480 -fps 25 -t 0 -b 1800000 -o - | ffmpeg -y -f h264 -i - -vcodec libx264  -f flv -rtmp_buffer 100 -rtmp_live live rtmp://localhost:1935/rtmp/stream

    When I run the above command, I see the following in the terminal:

       ffmpeg version N-81256-gd3426fb Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 4.9.2 (Raspbian 4.9.2-10)
     configuration: --enable-gpl --enable-libx264 --enable-nonfree
     libavutil      55. 28.100 / 55. 28.100
     libavcodec     57. 51.100 / 57. 51.100
     libavformat    57. 44.100 / 57. 44.100
     libavdevice    57.  0.102 / 57.  0.102
     libavfilter     6. 49.100 /  6. 49.100
     libswscale      4.  1.100 /  4.  1.100
     libswresample   2.  1.100 /  2.  1.100
     libpostproc    54.  0.100 / 54.  0.100
    Input #0, h264, from 'pipe:':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (High), yuv420p, 640x480, 25 fps, 25 tbr, 1200k tbn, 50 tbc
    [tcp @ 0x2c3c850] Connection to tcp://localhost:1935 failed (Connection refused), trying next address
    [libx264 @ 0x2c50d80] using cpu capabilities: ARMv6 NEON
    [libx264 @ 0x2c50d80] profile High, level 3.0
    [libx264 @ 0x2c50d80] 264 - core 148 r2705 3f5ed56 - H.264/MPEG-4 AVC codec - Copyleft 2003-2016 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    [flv @ 0x2c3b950] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
    Output #0, flv, to 'rtmp://localhost:1935/rtmp/stream':
     Metadata:
       encoder         : Lavf57.44.100
       Stream #0:0: Video: h264 (libx264) ([7][0][0][0] / 0x0007), yuv420p, 640x480, q=-1--1, 25 fps, 1k tbn, 25 tbc
       Metadata:
         encoder         : Lavc57.51.100 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
    ^Cmmal: Aborting program.0 size=     930kB time=00:00:45.44 bitrate= 167.7kbits/s speed=1.03x    

    [flv @ 0x2c3b950] Failed to update header with correct duration.
    [flv @ 0x2c3b950] Failed to update header with correct filesize.
    frame= 1197 fps= 26 q=-1.0 Lsize=     976kB time=00:00:47.76 bitrate= 167.5kbits/s speed=1.04x    
    video:953kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.484349%
    [libx264 @ 0x2c50d80] frame I:5     Avg QP:18.91  size:  3600
    [libx264 @ 0x2c50d80] frame P:299   Avg QP:21.38  size:  1486
    [libx264 @ 0x2c50d80] frame B:893   Avg QP:20.62  size:   574
    [libx264 @ 0x2c50d80] consecutive B-frames:  0.4%  0.0%  1.0% 98.6%
    [libx264 @ 0x2c50d80] mb I  I16..4: 10.1% 87.1%  2.8%
    [libx264 @ 0x2c50d80] mb P  I16..4:  3.8%  7.4%  0.0%  P16..4: 34.2%  2.1%  1.5%  0.0%  0.0%    skip:51.0%
    [libx264 @ 0x2c50d80] mb B  I16..4:  0.2%  0.3%  0.0%  B16..8: 20.1%  0.4%  0.0%  direct: 3.8%  skip:75.1%  L0:48.6% L1:50.9% BI: 0.6%
    [libx264 @ 0x2c50d80] 8x8 transform intra:68.1% inter:97.0%
    [libx264 @ 0x2c50d80] coded y,uvDC,uvAC intra: 10.1% 27.5% 1.8% inter: 2.0% 12.9% 0.1%
    [libx264 @ 0x2c50d80] i16 v,h,dc,p: 18% 20%  9% 53%
    [libx264 @ 0x2c50d80] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 11% 58%  2%  2%  1%  2%  1%  1%
    [libx264 @ 0x2c50d80] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 26% 30%  3%  4%  4%  6%  2%  2%
    [libx264 @ 0x2c50d80] i8c dc,h,v,p: 66% 18% 15%  1%
    [libx264 @ 0x2c50d80] Weighted P-Frames: Y:1.7% UV:1.3%
    [libx264 @ 0x2c50d80] ref P L0: 58.2%  2.2% 25.5% 14.1%  0.1%
    [libx264 @ 0x2c50d80] ref B L0: 81.7% 13.3%  5.0%
    [libx264 @ 0x2c50d80] ref B L1: 93.6%  6.4%
    [libx264 @ 0x2c50d80] kb/s:162.87
    Exiting normally, received signal 2.

    On my var/www/dash directory

    stream-0.m4a   stream-9600.m4a   stream-19200.m4a  stream-29200.m4a  stream-init.m4a  stream-raw.m4a
    stream-0.m4v   stream-9600.m4v   stream-19200.m4v  stream-29200.m4v  stream-init.m4v  stream-raw.m4v
    stream.mpd
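
    Two things in the capture command are worth revisiting; both are untested suggestions. The raspivid output is already H.264, so the libx264 re-encode can be replaced with a stream copy, and if re-encoding is kept, a fixed keyframe interval helps nginx-rtmp cut DASH fragments:

```shell
# Sketch: remux instead of re-encoding (cheap on a Pi); -framerate tells the
# raw H.264 demuxer the rate, since the elementary stream has no timestamps.
raspivid -w 640 -h 480 -fps 25 -t 0 -b 1800000 -o - | \
  ffmpeg -y -f h264 -framerate 25 -i - -c:v copy -f flv rtmp://localhost:1935/rtmp/stream

# If re-encoding, pin the GOP (here: one keyframe per second at 25 fps):
raspivid -w 640 -h 480 -fps 25 -t 0 -b 1800000 -o - | \
  ffmpeg -y -f h264 -framerate 25 -i - -c:v libx264 -preset ultrafast -g 25 \
    -f flv rtmp://localhost:1935/rtmp/stream
```

    The two "Failed to update header" warnings are generally harmless here: the FLV muxer cannot seek back to patch duration and filesize when writing to a non-seekable output such as an RTMP connection.
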
  • FFmpeg in Android Studio

    7 December 2016, by Ahmed Abd-Elmeged

    I'm trying to add the FFmpeg library to use it in a live-streaming app based on the YouTube API. When I add it and use the CMake build tool, the header files are only visible in the native C class I created; I want them to be included from the other header files that came with the library. How can I do that in the CMake script, or in another way?

    This is my CMake build script:

    # Sets the minimum version of CMake required to build the native
    # library. You should either keep the default value or only pass a
    # value of 3.4.0 or lower.

    cmake_minimum_required(VERSION 3.4.1)

    # Creates and names a library, sets it as either STATIC
    # or SHARED, and provides the relative paths to its source code.
    # You can define multiple libraries, and CMake builds it for you.
    # Gradle automatically packages shared libraries with your APK.

    add_library( # Sets the name of the library.
                ffmpeg-jni

                # Sets the library as a shared library.
                SHARED

                # Provides a relative path to your source file(s).
                # Associated headers in the same location as their source
                # file are automatically included.
                src/main/cpp/ffmpeg-jni.c )

    # Searches for a specified prebuilt library and stores the path as a
    # variable. Because system libraries are included in the search path by
    # default, you only need to specify the name of the public NDK library
    # you want to add. CMake verifies that the library exists before
    # completing its build.

    find_library( # Sets the name of the path variable.
                 log-lib

                 # Specifies the name of the NDK library that
                 # you want CMake to locate.
                 log )

    # Specifies libraries CMake should link to your target library. You
    # can link multiple libraries, such as libraries you define in the
    # build script, prebuilt third-party libraries, or system libraries.

    target_link_libraries( # Specifies the target library.
                          ffmpeg-jni

                          # Links the target library to the log library
                          # included in the NDK.
                          ${log-lib} )

    include_directories(src/main/cpp/include/ffmpeglib)

    include_directories(src/main/cpp/include/libavcodec)

    include_directories(src/main/cpp/include/libavformat)

    include_directories(src/main/cpp/include/libavutil)
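
    A note on the script above: the native code includes headers as "libavcodec/avcodec.h", so the include path has to point at the parent include directory, not at each libav* subdirectory. A sketch of the target-scoped form, assuming the headers actually live under src/main/cpp/include:

```cmake
# Replaces the four include_directories() calls; the path is relative to
# this CMakeLists.txt. With the include root on the path, nested includes
# such as "libavcodec/avcodec.h" resolve from any source or header file
# compiled into the target.
target_include_directories(ffmpeg-jni PRIVATE
                           ${CMAKE_CURRENT_SOURCE_DIR}/src/main/cpp/include)
```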

    This is my native class, created as a native file in Android Studio. I also get an error: jintJNICALL not found.

    #include <jni.h>
    #include <android/log.h>
    #include <stdio.h>
    #include <string.h>

    #include "libavcodec/avcodec.h"
    #include "libavformat/avformat.h"
    #include "libavutil/opt.h"



    #ifdef __cplusplus
    extern "C" {
    #endif

    JNIEXPORT jboolean JNICALL
    Java_com_example_venrto_ffmpegtest_Ffmpeg_init(JNIEnv *env, jobject thiz,
                                                   jint width, jint height,
                                                   jint audio_sample_rate,
                                                   jstring rtmp_url);
    JNIEXPORT void JNICALL
    Java_com_example_venrto_ffmpegtest_Ffmpeg_shutdown(JNIEnv *env, jobject thiz);
    JNIEXPORT jint JNICALL
    Java_com_example_venrto_ffmpegtest_Ffmpeg_encodeVideoFrame(JNIEnv *env, jobject thiz,
                                                               jbyteArray yuv_image);
    JNIEXPORT jint JNICALL
    Java_com_example_venrto_ffmpegtest_Ffmpeg_encodeAudioFrame(JNIEnv *env, jobject thiz,
                                                               jshortArray audio_data,
                                                               jint length);

    #ifdef __cplusplus
    }
    #endif

    #define LOGI(...) __android_log_print(ANDROID_LOG_INFO, "ffmpeg-jni", __VA_ARGS__)
    #define URL_WRONLY 2
    static AVFormatContext *fmt_context;
    static AVStream *video_stream;
    static AVStream *audio_stream;

    static int pts = 0;
    static int last_audio_pts = 0;

    // Buffers for UV format conversion
    static unsigned char *u_buf;
    static unsigned char *v_buf;

    static int enable_audio = 1;
    static int64_t audio_samples_written = 0;
    static int audio_sample_rate = 0;

    // Stupid buffer for audio samples. Not even a proper ring buffer
    #define AUDIO_MAX_BUF_SIZE 16384  // 2x what we get from Java
    static short audio_buf[AUDIO_MAX_BUF_SIZE];
    static int audio_buf_size = 0;

    void AudioBuffer_Push(const short *audio, int num_samples) {
       if (audio_buf_size >= AUDIO_MAX_BUF_SIZE - num_samples) {
           LOGI("AUDIO BUFFER OVERFLOW: %i + %i > %i", audio_buf_size, num_samples,
                AUDIO_MAX_BUF_SIZE);
           return;
       }
       for (int i = 0; i < num_samples; i++) {
           audio_buf[audio_buf_size++] = audio[i];
       }
    }

    int AudioBuffer_Size() { return audio_buf_size; }

    short *AudioBuffer_Get() { return audio_buf; }

    void AudioBuffer_Pop(int num_samples) {
       if (num_samples > audio_buf_size) {
           LOGI("Audio buffer Pop WTF: %i vs %i", num_samples, audio_buf_size);
           return;
       }
       memmove(audio_buf, audio_buf + num_samples, (audio_buf_size - num_samples) * sizeof(short));
       audio_buf_size -= num_samples;
    }

    void AudioBuffer_Clear() {
       memset(audio_buf, 0, sizeof(audio_buf));
       audio_buf_size = 0;
    }

    static void log_callback(void *ptr, int level, const char *fmt, va_list vl) {
       char x[2048];
       vsnprintf(x, 2048, fmt, vl);
       LOGI("%s", x);
    }

    JNIEXPORT jboolean
    JNICALL Java_com_example_venrto_ffmpegtest_Ffmpeg_init(JNIEnv *env, jobject thiz,
                                                            jint width, jint height,
                                                            jint audio_sample_rate_param,
                                                            jstring rtmp_url) {
       avcodec_register_all();
       av_register_all();
       av_log_set_callback(log_callback);

       fmt_context = avformat_alloc_context();
       AVOutputFormat *ofmt = av_guess_format("flv", NULL, NULL);
       if (ofmt) {
           LOGI("av_guess_format returned %s", ofmt->long_name);
       } else {
           LOGI("av_guess_format fail");
           return JNI_FALSE;
       }

       fmt_context->oformat = ofmt;
       LOGI("creating video stream");
       video_stream = av_new_stream(fmt_context, 0);

       if (enable_audio) {
           LOGI("creating audio stream");
           audio_stream = av_new_stream(fmt_context, 1);
       }

       // Open Video Codec.
       // ======================
       AVCodec *video_codec = avcodec_find_encoder(AV_CODEC_ID_H264);
       if (!video_codec) {
           LOGI("Did not find the video codec");
           return JNI_FALSE;  // leak!
       } else {
           LOGI("Video codec found!");
       }
       AVCodecContext *video_codec_ctx = video_stream->codec;
       video_codec_ctx->codec_id = video_codec->id;
       video_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
       video_codec_ctx->level = 31;

       video_codec_ctx->width = width;
       video_codec_ctx->height = height;
       video_codec_ctx->pix_fmt = PIX_FMT_YUV420P;
       video_codec_ctx->rc_max_rate = 0;
       video_codec_ctx->rc_buffer_size = 0;
       video_codec_ctx->gop_size = 12;
       video_codec_ctx->max_b_frames = 0;
       video_codec_ctx->slices = 8;
       video_codec_ctx->b_frame_strategy = 1;
       video_codec_ctx->coder_type = 0;
       video_codec_ctx->me_cmp = 1;
       video_codec_ctx->me_range = 16;
       video_codec_ctx->qmin = 10;
       video_codec_ctx->qmax = 51;
       video_codec_ctx->keyint_min = 25;
       video_codec_ctx->refs = 3;
       video_codec_ctx->trellis = 0;
       video_codec_ctx->scenechange_threshold = 40;
       video_codec_ctx->flags |= CODEC_FLAG_LOOP_FILTER;
       video_codec_ctx->me_method = ME_HEX;
       video_codec_ctx->me_subpel_quality = 6;
       video_codec_ctx->i_quant_factor = 0.71;
       video_codec_ctx->qcompress = 0.6;
       video_codec_ctx->max_qdiff = 4;
       video_codec_ctx->time_base.den = 10;
       video_codec_ctx->time_base.num = 1;
       video_codec_ctx->bit_rate = 3200 * 1000;
       video_codec_ctx->bit_rate_tolerance = 0;
       video_codec_ctx->flags2 |= 0x00000100;

       fmt_context->bit_rate = 4000 * 1000;

       av_opt_set(video_codec_ctx, "partitions", "i8x8,i4x4,p8x8,b8x8", 0);
       av_opt_set_int(video_codec_ctx, "direct-pred", 1, 0);
       av_opt_set_int(video_codec_ctx, "rc-lookahead", 0, 0);
       av_opt_set_int(video_codec_ctx, "fast-pskip", 1, 0);
       av_opt_set_int(video_codec_ctx, "mixed-refs", 1, 0);
       av_opt_set_int(video_codec_ctx, "8x8dct", 0, 0);
       av_opt_set_int(video_codec_ctx, "weightb", 0, 0);

       if (fmt_context->oformat->flags & AVFMT_GLOBALHEADER)
           video_codec_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;

       LOGI("Opening video codec");
       AVDictionary *vopts = NULL;
       av_dict_set(&vopts, "profile", "main", 0);
       //av_dict_set(&vopts, "vprofile", "main", 0);
       av_dict_set(&vopts, "rc-lookahead", "0", 0);
       av_dict_set(&vopts, "tune", "film", 0);
       av_dict_set(&vopts, "preset", "ultrafast", 0);
       av_opt_set(video_codec_ctx->priv_data, "tune", "film", 0);
       av_opt_set(video_codec_ctx->priv_data, "preset", "ultrafast", 0);
       av_opt_set(video_codec_ctx->priv_data, "tune", "film", 0);
       int open_res = avcodec_open2(video_codec_ctx, video_codec, &vopts);
       if (open_res < 0) {
           LOGI("Error opening video codec: %i", open_res);
           return JNI_FALSE;   // leak!
       }

       // Open Audio Codec.
       // ======================

       if (enable_audio) {
           AudioBuffer_Clear();
           audio_sample_rate = audio_sample_rate_param;
           AVCodec *audio_codec = avcodec_find_encoder(AV_CODEC_ID_AAC);
           if (!audio_codec) {
               LOGI("Did not find the audio codec");
               return JNI_FALSE;  // leak!
           } else {
               LOGI("Audio codec found!");
           }
           AVCodecContext *audio_codec_ctx = audio_stream->codec;
           audio_codec_ctx->codec_id = audio_codec->id;
           audio_codec_ctx->codec_type = AVMEDIA_TYPE_AUDIO;
           audio_codec_ctx->bit_rate = 128000;
           audio_codec_ctx->bit_rate_tolerance = 16000;
           audio_codec_ctx->channels = 1;
           audio_codec_ctx->profile = FF_PROFILE_AAC_LOW;
           audio_codec_ctx->sample_fmt = AV_SAMPLE_FMT_FLT;
           audio_codec_ctx->sample_rate = 44100;

           LOGI("Opening audio codec");
           AVDictionary *opts = NULL;
           av_dict_set(&opts, "strict", "experimental", 0);
           open_res = avcodec_open2(audio_codec_ctx, audio_codec, &opts);
           LOGI("audio frame size: %i", audio_codec_ctx->frame_size);

           if (open_res < 0) {
               LOGI("Error opening audio codec: %i", open_res);
               return JNI_FALSE;   // leak!
           }
       }

       const jbyte *url = (*env)->GetStringUTFChars(env, rtmp_url, NULL);

       // Point to an output file
       if (!(ofmt->flags & AVFMT_NOFILE)) {
           if (avio_open(&fmt_context->pb, url, URL_WRONLY) < 0) {
               LOGI("ERROR: Could not open file %s", url);
               return JNI_FALSE;  // leak!
           }
       }
       (*env)->ReleaseStringUTFChars(env, rtmp_url, url);

       LOGI("Writing output header.");
       // Write file header
       if (avformat_write_header(fmt_context, NULL) != 0) {
           LOGI("ERROR: av_write_header failed");
           return JNI_FALSE;
       }

       pts = 0;
       last_audio_pts = 0;
       audio_samples_written = 0;

       // Initialize buffers for UV format conversion
       int frame_size = video_codec_ctx->width * video_codec_ctx->height;
       u_buf = (unsigned char *) av_malloc(frame_size / 4);
       v_buf = (unsigned char *) av_malloc(frame_size / 4);

       LOGI("ffmpeg encoding init done");
       return JNI_TRUE;
    }

    JNIEXPORT void JNICALL
    Java_com_example_venrto_ffmpegtest_Ffmpeg_shutdown(JNIEnv *env, jobject thiz) {
       av_write_trailer(fmt_context);
       avio_close(fmt_context->pb);
       avcodec_close(video_stream->codec);
       if (enable_audio) {
           avcodec_close(audio_stream->codec);
       }
       av_free(fmt_context);
       av_free(u_buf);
       av_free(v_buf);

       fmt_context = NULL;
       u_buf = NULL;
       v_buf = NULL;
    }

     JNIEXPORT jint JNICALL
     Java_com_example_venrto_ffmpegtest_Ffmpeg_encodeVideoFrame(JNIEnv *env,
                                                                jobject thiz,
                                                                jbyteArray yuv_image) {
        int yuv_length = (*env)->GetArrayLength(env, yuv_image);
        unsigned char *yuv_data =
                (unsigned char *) (*env)->GetByteArrayElements(env, yuv_image, 0);

        AVCodecContext *video_codec_ctx = video_stream->codec;
    //LOGI("Yuv size: %i w: %i h: %i", yuv_length, video_codec_ctx->width, video_codec_ctx->height);

       int frame_size = video_codec_ctx->width * video_codec_ctx->height;

       const unsigned char *uv = yuv_data + frame_size;

     // Deinterleave the packed UV plane into separate U and V planes (I420).
     // The Y plane is the same, so we don't touch it. Note that reading V at
     // even offsets and U at odd offsets matches NV21 (Android's default
     // camera preview format) rather than NV12.
        for (int i = 0; i < frame_size / 4; i++) {
            v_buf[i] = uv[i * 2];
            u_buf[i] = uv[i * 2 + 1];
        }

        AVFrame source;
        memset(&source, 0, sizeof(AVFrame));
        source.data[0] = yuv_data;
        source.data[1] = u_buf;
        source.data[2] = v_buf;
        source.linesize[0] = video_codec_ctx->width;
        source.linesize[1] = video_codec_ctx->width / 2;
        source.linesize[2] = video_codec_ctx->width / 2;

     // only for bitrate regulation. irrelevant for sync.
        source.pts = pts;
        pts++;

        int out_length = frame_size + (frame_size / 2);
        unsigned char *out = (unsigned char *) av_malloc(out_length);
        int compressed_length = avcodec_encode_video(video_codec_ctx, out, out_length, &source);

        (*env)->ReleaseByteArrayElements(env, yuv_image, (jbyte *) yuv_data, 0);

     // Write to file too
        if (compressed_length > 0) {
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.pts = last_audio_pts;
            if (video_codec_ctx->coded_frame && video_codec_ctx->coded_frame->key_frame) {
                pkt.flags |= AV_PKT_FLAG_KEY;
            }
            pkt.stream_index = video_stream->index;
            pkt.data = out;
            pkt.size = compressed_length;
            if (av_interleaved_write_frame(fmt_context, &pkt) != 0) {
                LOGI("Error writing video frame");
            }
        } else {
            LOGI("??? compressed_length <= 0");
        }

        last_audio_pts++;

        av_free(out);
        return compressed_length;
     }

     JNIEXPORT jint JNICALL
     Java_com_example_venrto_ffmpegtest_Ffmpeg_encodeAudioFrame(JNIEnv *env,
                                                                jobject thiz,
                                                                jshortArray audio_data,
                                                                jint length) {
        if (!enable_audio) {
            return 0;
        }

        short *audio = (*env)->GetShortArrayElements(env, audio_data, 0);
     //LOGI("java audio buffer size: %i", length);

        AVCodecContext *audio_codec_ctx = audio_stream->codec;

        unsigned char *out = av_malloc(128000);

        AudioBuffer_Push(audio, length);

        int total_compressed = 0;
        while (AudioBuffer_Size() >= audio_codec_ctx->frame_size) {
            AVPacket pkt;
            av_init_packet(&pkt);

            int compressed_length = avcodec_encode_audio(audio_codec_ctx, out, 128000,
                                                         AudioBuffer_Get());

            total_compressed += compressed_length;
            audio_samples_written += audio_codec_ctx->frame_size;

            int new_pts = (audio_samples_written * 1000) / audio_sample_rate;
            if (compressed_length > 0) {
                pkt.size = compressed_length;
                pkt.pts = new_pts;
                last_audio_pts = new_pts;
     //LOGI("audio_samples_written: %i  comp_length: %i   pts: %i", (int)audio_samples_written, (int)compressed_length, (int)new_pts);
                pkt.flags |= AV_PKT_FLAG_KEY;
                pkt.stream_index = audio_stream->index;
                pkt.data = out;
                if (av_interleaved_write_frame(fmt_context, &pkt) != 0) {
                    LOGI("Error writing audio frame");
                }
            }
            AudioBuffer_Pop(audio_codec_ctx->frame_size);
        }

        (*env)->ReleaseShortArrayElements(env, audio_data, audio, 0);

        av_free(out);
        return total_compressed;
     }
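
     The UV deinterleave step in `encodeVideoFrame` can be isolated and sanity-checked on its own. Here is a small standalone sketch (my own, not from the original project); note that reading V at even offsets and U at odd offsets matches NV21 ordering, Android's default camera preview format:

```c
#include <assert.h>

/* Standalone version of the deinterleave loop from encodeVideoFrame.
 * The packed chroma plane stores V first, then U (NV21 ordering);
 * frame_size is width * height, so the packed plane holds
 * frame_size / 2 bytes and each output plane frame_size / 4 bytes. */
static void deinterleave_vu(const unsigned char *uv,
                            unsigned char *u_buf,
                            unsigned char *v_buf,
                            int frame_size) {
    int i;
    for (i = 0; i < frame_size / 4; i++) {
        v_buf[i] = uv[i * 2];
        u_buf[i] = uv[i * 2 + 1];
    }
}
```

     Feeding it a tiny hand-made buffer makes it easy to verify the channel ordering before debugging the encoder itself.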

     This is an example of the header file includes that give me an error:

    #include "libavutil/samplefmt.h"
    #include "libavutil/avutil.h"
    #include "libavutil/cpu.h"
    #include "libavutil/dict.h"
    #include "libavutil/log.h"
    #include "libavutil/pixfmt.h"
    #include "libavutil/rational.h"

    #include "libavutil/version.h"
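
     Those `libavutil/...` paths are resolved relative to the FFmpeg source root, so this error usually means the compiler's include path points at `libavcodec/` itself (or nowhere) instead of at the root. A minimal sketch of the fix for an Android.mk build (the `FFMPEG_ROOT` path is an assumption, adjust it to wherever your FFmpeg sources live):

```make
# Hypothetical layout: FFmpeg sources checked out under jni/ffmpeg.
# avcodec.h does #include "libavutil/..." relative to this root,
# so the root directory itself must be on the include path.
FFMPEG_ROOT := $(LOCAL_PATH)/ffmpeg
LOCAL_C_INCLUDES += $(FFMPEG_ROOT)
```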

     in libavcodec/avcodec.h.

     This code is based on this example:
     https://github.com/youtube/yt-watchme