Advanced search

Media (0)

Keyword: - Tags -/xmlrpc

No media matching your criteria is available on the site.

Other articles (111)

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To work around installation difficulties caused mainly by server-side software dependencies, an "all in one" installation script written in bash was created to make this step easier on a server running a compatible Linux distribution.
    You must have SSH access to your server and a "root" account in order to use it, which makes it possible to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation on how to use the installation script (...)

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.

  • What exactly does this script do?

    18 January 2011, by

    This script is written in bash, so it can easily be used on any server.
    It is only compatible with a specific list of distributions (see the list of compatible distributions).
    Installing MediaSPIP’s dependencies
    Its main role is to install all the software dependencies required on the server side, namely:
    The basic tools needed to install the rest of the dependencies; the development tools: build-essential (via APT from the official repositories); (...)
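
    For readers who want a concrete picture, here is a rough, hypothetical sketch of the kind of steps these excerpts describe (the host name and script file name are placeholders, not taken from the articles; only build-essential is actually named above):

    # Connect to the server with the "root" account over SSH, as required above
    ssh root@your-server.example.org
    # Run the "all in one" installation script (the file name here is a placeholder)
    bash ./mediaspip_install.sh
    # Internally, the script installs the server-side dependencies via APT,
    # e.g. the development tools mentioned above, roughly along the lines of:
    apt-get update && apt-get install -y build-essential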

On other sites (11664)

  • 4 ways to create more effective funnels

    24 February 2020, by Jake Thornton — Uncategorized

    Accurately measuring the success of your customer’s journey on your website is vital to increasing conversions and having the best outcome for your business. When it comes to website analytics, the Funnels feature is the best place to start measuring each touch point in the customer journey. From here you’ll find out where you lose your visitors so you can make changes to your website and convert more in the future.

    The funnels feature lets you measure the steps (actions, events and pages) your users go through to reach the desired outcomes you want them to achieve. This gives you valuable insights into the desired journey for your customers. 

    When creating a funnel with the funnels feature, you anticipate the customer journey that you want to measure, for example:

    Step 1 – Visitor lands on your homepage and sees the promotion you’re offering. 
    Step 2 – They click the call-to-action (CTA) button which leads them to information on the product
    Step 3 – They add the product to their cart
    Step 4 – They fill in their personal information and credit card details
    Step 5 – They click the “pay now” button

    From here you can see exactly how many visitors you lose between each step. Then you can implement new techniques to decrease these drop-offs and evaluate the success of your changes over time.
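
    As an aside (not part of the original post), each of those steps is simply a page view or event that Matomo has recorded, for instance automatically via the JavaScript tracker or explicitly through the Tracking HTTP API. A minimal sketch of the latter, assuming a Matomo instance at matomo.example.com and site ID 1 (both placeholders):

    # Step 1 - the visitor lands on the homepage
    curl "https://matomo.example.com/matomo.php?idsite=1&rec=1&apiv=1&action_name=Homepage&url=https%3A%2F%2Fshop.example.com%2F"
    # Step 2 - the visitor follows the call-to-action to the product page
    curl "https://matomo.example.com/matomo.php?idsite=1&rec=1&apiv=1&action_name=Product&url=https%3A%2F%2Fshop.example.com%2Fproduct"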

    But what about the non-conventional routes to conversion?

    That’s right, visitors can end up taking all sorts of different paths through your website. It’s important to use other features in Matomo to discover these popular pathways your visitors may be taking before the point of conversion.

    Here are 4 Matomo features for discovering important alternative funnels on your website:

    The transitions feature lets you visualise mini funnels on selected pages. You can see how visitors landed on a specific page, and then where they moved on to from this specific page.

    First you need to identify the page(s) that sells your product or service the most. 

    Whether it’s your homepage, a product page or an information page on your services. The transitions feature will then show you the before and after pathways visitors are already taking to get from page to page.

    The transitions feature is located under Behaviour – Pages. Find the important page you would like to analyse and click on the Transitions icon.

    In the example above, you’ll see 18% of visitors who entered from internal pages came from the homepage, which you may have already suspected as the first step in your conversion funnel.

    However, the exact same % of visitors are also entering through a blog post article called /best-of-the-best/.

    In this case, it highlights the importance of creating funnels with popular blog posts as the first step in the funnel. Your visitors may have found this post through social media, a search engine etc. Whatever the case, your blog posts could be your biggest influencer for conversions on your website.

    >> Learn more about Transitions
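
    As a side note (not from the original post), the same transitions data can also be queried programmatically through Matomo’s Reporting HTTP API. The method name and parameters below reflect my understanding of the Transitions plugin’s API and should be treated as an assumption, since they may differ between Matomo versions:

    # Hypothetical sketch: fetch the transitions for one page URL as JSON
    # (matomo.example.com, idSite=1 and the token are placeholders)
    curl "https://matomo.example.com/index.php?module=API&method=Transitions.getTransitionsForPageUrl&pageUrl=https%3A%2F%2Fshop.example.com%2F&idSite=1&period=month&date=today&format=JSON&token_auth=YOUR_TOKEN"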

    The overlay feature lets you see exactly where visitors are clicking on your landing pages which moves them either in the right or wrong direction in the conversion funnel. 

    If you see a high percentage of clicks to a page that’s off the beaten track from your desired conversion funnel, use the Funnels feature to follow this pathway and analyse how they get back to the pathway you initially intended them to take.

    The best thing about the page overlay feature is the visualisation showing the results on the landing page itself. This gives you an idea of where they may be getting distracted by the wrong content.

    You can locate the page overlay feature beside the transitions feature, shown in the screenshot below.

    The page overlay feature also gives you a summary of the pageviews, clicks, bounce rates, exit rates and average time spent on page, so you can measure the overall success of each page in the display menu.

    >> Learn more about Page Overlay

    If you’re looking to see many of the most popular pathways your visitors are taking all at once, then Users Flow is a powerful feature which shows this visualisation.

    Note: For Matomo On-Premise users, Users Flow is a premium feature. More information here.

    The thicker the blue line between interactions, the more popular the pathway.

    Here you can see how visitors navigate their way through your website before converting, which presents clear steps in the conversion funnel that are worth monitoring and improving, so that your efforts go into the right areas of your website.

    >> Learn more about Users Flow

    Another important feature to use, which is integrated within the funnels feature, is row evolution, which shows you important changes in your users’ behaviour over time.

    Having row evolution integrated within the funnels feature gives you a big advantage as it lets you measure the specific metrics and landing pages within your conversion funnel.

    You’ll be able to see the increases and decreases in entries and exits to your landing page, as well as increases and decreases in the number of visitors who proceed to the next step in the funnel, and the conversion rate %.

    You’ll also be able to add annotations so you can note all the changes you make to your landing pages over time and quickly identify how these changes impacted your conversion funnels.

    >> Learn more about Row Evolution

    Continually create more and more funnels!

    Measuring the success of the desired pathway you want your customers to take is crucial to ensure you are presenting the best possible user experience for your visitors.

    However, creating funnels for the less desired pathways is equally important. This way you’ll discover popular journeys your visitors are taking within your website that you weren’t previously aware of, and you can monitor them to make sure they still work in the future. You’ll be able to fix pain points more easily and find faster ways to get visitors back on the right track to converting.

  • ffmpeg encode video produce incorrect mediainfo encoding settings

    25 August 2021, by Foong

    I've been trying to do batch encode videos to H265 format. I am using media-autobuild_suite to build ffmpeg and have already updating ffmpeg to the latest version.

    


    ffmpeg version N-103367-g5ddb4b6a1b-g88b3e31562+1 Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 10.3.0 (Rev5, Built by MSYS2 project)
configuration:  --pkg-config=pkgconf --cc='ccache gcc' --cxx='ccache g++' --ld='ccache g++' --disable-autodetect --enable-amf --enable-bzlib --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-iconv --enable-lzma --enable-nvenc --enable-schannel --enable-zlib --enable-sdl2 --enable-ffnvcodec --enable-nvdec --enable-cuda-llvm --enable-gmp --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libdav1d --enable-libaom --disable-debug --enable-libfdk-aac --extra-libs=-liconv --enable-gpl --enable-version3 --enable-nonfree
libavutil      57.  4.101 / 57.  4.101
libavcodec     59.  5.101 / 59.  5.101
libavformat    59.  4.102 / 59.  4.102
libavdevice    59.  0.101 / 59.  0.101
libavfilter     8.  3.100 /  8.  3.100
libswscale      6.  0.100 /  6.  0.100
libswresample   4.  0.100 /  4.  0.100
libpostproc    56.  0.100 / 56.  0.100


    


    However, the Writing library and Encoding settings output is incorrect. Before updating ffmpeg, this issue did not occur. I wonder what might be missing that is causing this issue.
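
    One way to dig into this (not taken from the original question) is to compare what MediaInfo reports with the stream tags ffmpeg’s muxer actually wrote into the output file. A small sketch, where the MediaInfo field name is an assumption on my part:

    # Dump the stream tags written into the output container
    ffprobe -hide_banner -show_entries stream_tags -of default output.mkv
    # Show the field MediaInfo reports as "Encoding settings" (field name assumed)
    mediainfo --Inform="Video;%Encoded_Library_Settings%" output.mkv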

    


    Encoding CLI:

    


    ffmpeg -y -hide_banner -loglevel error -stats -hwaccel dxva2 -i "input.mkv" -c:v libx265 -vsync cfr -pix_fmt yuv420p10le -preset fast -tune animation -x265-params ctu=32:min-cu-size=8:max-tu-size=16:tu-intra-depth=2:tu-inter-depth=2:me=1:subme=3:merange=44:max-merge=2:keyint=250:min-keyint=23:rc-lookahead=60:lookahead-slices=6:bframes=6:bframe-bias=0:b-adapt=2:ref=6:limit-refs=3:limit-tu=1:aq-mode=3:aq-strength=0.6:rd=3:psy-rd=1.00:psy-rdoq=1.50:rdoq-level=1:deblock=-1,-1:crf=21.0:qblur=0.50:qcomp=0.60:qpmin=0:qpmax=51:frame-threads=1:strong-intra-smoothing=1:no-lossless=1:no-cu-lossless=1:constrained-intra=1:no-fast-intra=1:no-open-gop=1:no-temporal-layers=1:no-limit-modes=1:weightp=1:no-weightb=1:no-analyze-src-pics=1:no-rd-refine=1:signhide=1:sao=1:no-sao-non-deblock=1:b-pyramid=1:no-cutree=1:no-intra-refresh=1:no-amp=1:temporal-mvp=1:no-early-skip=1:no-tskip=1:no-tskip-fast=1:no-deblock=1:no-b-intra=1:no-splitrd-skip=1:no-strict-cbr=1:no-rc-grain=1:no-const-vbv=1:no-opt-qp-pps=1:no-opt-ref-list-length-pps=1:no-multi-pass-opt-rps=1:no-opt-cu-delta-qp=1:no-hdr=1:no-hdr-opt=1:no-dhdr10-opt=1:no-idr-recovery-sei=1:no-limit-sao=1:no-lowpass-dct=1:no-dynamic-refine=1:no-single-sei=1 -c:a libfdk_aac -vf "fps=fps=29.970,setdar=16/9,scale=960:540:flags=lanczos" -map 0:v:? -map 0:a:? -map_metadata:g -1 -map_chapters 0 "output.mkv"


    


    Input video info:

    


    Video
ID                          : 1
Format                      : AVC
Format/Info                 : Advanced Video Codec
Format profile              : High@L3
Format settings             : CABAC / 5 Ref Frames
Format settings, CABAC      : Yes
Format settings, Reference  : 5 frames
Codec ID                    : V_MPEG4/ISO/AVC
Bit rate                    : 1 595 kb/s
Nominal bit rate            : 2 030 kb/s
Width                       : 720 pixels
Height                      : 480 pixels
Display aspect ratio        : 16:9
Original display aspect rat : 3:2
Frame rate mode             : Variable
Original frame rate         : 29.970 FPS
Color space                 : YUV
Chroma subsampling          : 4:2:0
Bit depth                   : 8 bits
Scan type                   : Progressive
Bits/(Pixel*Frame)          : 0.196
Writing library             : x264 core 66 r1115M 11863ac
Encoding settings           : cabac=1 / ref=5 / deblock=1:1:1 / analyse=0x3:0x133 / me=esa / subme=7 / psy_rd=1.0:0.0 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=2 / deadzone=21,11 / chroma_qp_offset=-2 / threads=1 / nr=0 / decimate=0 / mbaff=0 / bframes=1 / b_pyramid=0 / b_adapt=1 / b_bias=0 / direct=3 / wpredb=1 / keyint=250 / keyint_min=25 / scenecut=40 / rc=2pass / bitrate=2030 / ratetol=1.0 / qcomp=0.60 / qpmin=10 / qpmax=51 / qpstep=4 / cplxblur=20.0 / qblur=0.5 / ip_ratio=1.40 / pb_ratio=1.30 / aq=1:1.00
Default                     : Yes
Forced                      : No


    


    Output video info:

    


    Video
ID                          : 1
Format                      : HEVC
Format/Info                 : High Efficiency Video Coding
Format profile              : Main 10@L3.1@Main
Codec ID                    : V_MPEGH/ISO/HEVC
Duration                    : 23 min 29 s
Width                       : 960 pixels
Height                      : 540 pixels
Display aspect ratio        : 16:9
Frame rate mode             : Constant
Frame rate                  : 29.970 (29970/1000) FPS
Color space                 : YUV
Chroma subsampling          : 4:2:0
Bit depth                   : 10 bits
Writing library             : Lavc59.5.100 libx265
Encoding settings           : cabac=1 / ref=5 / deblock=1:1:1 / analyse=0x3:0x133 / me=esa / subme=7 / psy_rd=1.0:0.0 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / cqm=2 / deadzone=21,11 / chroma_qp_offset=-2 / threads=1 / nr=0 / decimate=0 / mbaff=0 / bframes=1 / b_pyramid=0 / b_adapt=1 / b_bias=0 / direct=3 / wpredb=1 / keyint=250 / keyint_min=25 / scenecut=40 / rc=2pass / bitrate=2030 / ratetol=1.0 / qcomp=0.60 / qpmin=10 / qpmax=51 / qpstep=4 / cplxblur=20.0 / qblur=0.5 / ip_ratio=1.40 / pb_ratio=1.30 / aq=1:1.00
Default                     : Yes
Forced                      : No
Color range                 : Limited


    


    Expected (from previously encoded videos):

    


    Video
ID                          : 1
Format                      : HEVC
Format/Info                 : High Efficiency Video Coding
Format profile              : Main 10@L3.1@Main
Codec ID                    : V_MPEGH/ISO/HEVC
Duration                    : 24 min 37 s
Bit rate                    : 833 kb/s
Width                       : 960 pixels
Height                      : 720 pixels
Display aspect ratio        : 4:3
Frame rate mode             : Constant
Frame rate                  : 23.976 (23976/1000) FPS
Color space                 : YUV
Chroma subsampling          : 4:2:0
Bit depth                   : 10 bits
Bits/(Pixel*Frame)          : 0.050
Stream size                 : 147 MiB
Title                       : HEVC
Writing library             : x265 3.4+28-419182243:[Windows][GCC 10.2.0][64 bit] 10bit
Encoding settings           : cpuid=1111039 / frame-threads=3 / numa-pools=12 / wpp / no-pmode / no-pme / no-psnr / no-ssim / log-level=2 / input-csp=1 / input-res=960x720 / interlace=0 / total-frames=0 / level-idc=0 / high-tier=1 / uhd-bd=0 / ref=5 / no-allow-non-conformance / no-repeat-headers / annexb / no-aud / no-hrd / info / hash=0 / no-temporal-layers / no-open-gop / min-keyint=1 / keyint=360 / gop-lookahead=0 / bframes=4 / b-adapt=2 / b-pyramid / bframe-bias=0 / rc-lookahead=20 / lookahead-slices=4 / scenecut=40 / hist-scenecut=0 / radl=0 / no-splice / no-intra-refresh / ctu=64 / min-cu-size=8 / no-rect / no-amp / max-tu-size=32 / tu-inter-depth=1 / tu-intra-depth=1 / limit-tu=0 / rdoq-level=0 / dynamic-rd=0.00 / no-ssim-rd / signhide / no-tskip / nr-intra=0 / nr-inter=0 / no-constrained-intra / strong-intra-smoothing / max-merge=2 / limit-refs=3 / no-limit-modes / me=1 / subme=3 / merange=16 / temporal-mvp / no-frame-dup / no-hme / weightp / no-weightb / no-analyze-src-pics / no-deblock / sao / no-sao-non-deblock / rd=3 / selective-sao=4 / no-early-skip / rskip / no-fast-intra / no-tskip-fast / no-cu-lossless / no-b-intra / no-splitrd-skip / rdpenalty=0 / psy-rd=0.20 / psy-rdoq=0.00 / no-rd-refine / no-lossless / cbqpoffs=0 / crqpoffs=0 / rc=crf / crf=26.0 / qcomp=0.60 / qpstep=4 / stats-write=0 / stats-read=0 / ipratio=1.40 / pbratio=1.00 / aq-mode=0 / aq-strength=0.00 / no-cutree / zone-count=0 / no-strict-cbr / qg-size=64 / no-rc-grain / qpmax=51 / qpmin=0 / no-const-vbv / sar=1 / overscan=0 / videoformat=5 / range=0 / colorprim=2 / transfer=2 / colormatrix=2 / chromaloc=0 / display-window=0 / cll=0,0 / min-luma=0 / max-luma=1023 / log2-max-poc-lsb=8 / vui-timing-info / vui-hrd-info / slices=1 / no-opt-qp-pps / no-opt-ref-list-length-pps / no-multi-pass-opt-rps / scenecut-bias=0.05 / hist-threshold=0.03 / no-opt-cu-delta-qp / no-aq-motion / no-hdr10 / no-hdr10-opt / no-dhdr10-opt / no-idr-recovery-sei / analysis-reuse-level=0 / analysis-save-reuse-level=0 / analysis-load-reuse-level=0 / scale-factor=0 / refine-intra=0 / refine-inter=0 / refine-mv=1 / refine-ctu-distortion=0 / no-limit-sao / ctu-info=0 / no-lowpass-dct / refine-analysis-type=0 / copy-pic=1 / max-ausize-factor=1.0 / no-dynamic-refine / no-single-sei / no-hevc-aq / no-svt / no-field / qp-adaptation-range=1.00 / no-scenecut-aware-qpconformance-window-offsets / right=0 / bottom=0 / decoder-max-rate=0 / no-vbv-live-multi-pass
Language                    : English
Default                     : Yes
Forced                      : No
Color range                 : Limited


    


    I have tried with the simplest ffmpeg build from the official website and a minimal encoding CLI, but the output is still the same.

    


    ffmpeg -i "input.mkv" -c:v libx265 "output.mkv"


    


    Update:
Tried converting the video with FFmpeg 4.4 "Rao" from https://www.ffmpeg.org/ and it works fine. But with ffmpeg compiled from media-autobuild_suite, whether using the light build or the full build, the Writing library and Encoding settings output is still incorrect.

    


  • Muxing Android MediaCodec encoded H264 packets into RTMP

    31 December 2015, by Vadym

    I am coming from the thread Encoding H.264 from camera with Android MediaCodec. My setup is very similar. However, I attempt to mux the encoded frames with javacv and broadcast them via RTMP.

    RtmpClient.java

    ...
    private volatile BlockingQueue<byte[]> mFrameQueue = new LinkedBlockingQueue<byte[]>(MAXIMUM_VIDEO_FRAME_BACKLOG);
    ...
    private void startStream() throws FrameRecorder.Exception, IOException {
       if (TextUtils.isEmpty(mDestination)) {
           throw new IllegalStateException("Cannot start RtmpClient without destination");
       }

       if (mCamera == null) {
           throw new IllegalStateException("Cannot start RtmpClient without camera.");
       }

       Camera.Parameters cameraParams = mCamera.getParameters();

       mRecorder = new FFmpegFrameRecorder(
               mDestination,
               mVideoQuality.resX,
               mVideoQuality.resY,
               (mAudioQuality.channelType.equals(AudioQuality.CHANNEL_TYPE_STEREO) ? 2 : 1));

       mRecorder.setFormat("flv");

       mRecorder.setFrameRate(mVideoQuality.frameRate);
       mRecorder.setVideoBitrate(mVideoQuality.bitRate);
       mRecorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);

       mRecorder.setSampleRate(mAudioQuality.samplingRate);
       mRecorder.setAudioBitrate(mAudioQuality.bitRate);
       mRecorder.setAudioCodec(avcodec.AV_CODEC_ID_AAC);

       mVideoStream = new VideoStream(mRecorder, mVideoQuality, mFrameQueue, mCamera);
       mAudioStream = new AudioStream(mRecorder, mAudioQuality);

       mRecorder.start();

       // Setup a buffered preview callback
       setupCameraCallback(mCamera, mRtmpClient, DEFAULT_PREVIEW_CALLBACK_BUFFERS,
               mVideoQuality.resX * mVideoQuality.resY * ImageFormat.getBitsPerPixel(
                       cameraParams.getPreviewFormat())/8);

       try {
           mVideoStream.start();
           mAudioStream.start();
       }
       catch(Exception e) {
           e.printStackTrace();
           stopStream();
       }
    }
    ...
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
       boolean frameQueued = false;

       if (mRecorder == null || data == null) {
           return;
       }

       frameQueued = mFrameQueue.offer(data);

       // return the buffer to be reused - done in videostream
       //camera.addCallbackBuffer(data);
    }
    ...

    VideoStream.java

    ...
    @Override
    public void run() {
       try {
           mMediaCodec = MediaCodec.createEncoderByType("video/avc");
           MediaFormat mediaFormat = MediaFormat.createVideoFormat("video/avc", mVideoQuality.resX, mVideoQuality.resY);
           mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, mVideoQuality.bitRate);
           mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, mVideoQuality.frameRate);
           mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
           mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
           mMediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
           mMediaCodec.start();
       }
       catch(IOException e) {
           e.printStackTrace();
       }

       long startTimestamp = System.currentTimeMillis();
       long frameTimestamp = 0;
       byte[] rawFrame = null;

       try {
           while (!Thread.interrupted()) {
               rawFrame = mFrameQueue.take();

               frameTimestamp = 1000 * (System.currentTimeMillis() - startTimestamp);

               encodeFrame(rawFrame, frameTimestamp);

               // return the buffer to be reused
               mCamera.addCallbackBuffer(rawFrame);
           }
       }
       catch (InterruptedException ignore) {
           // ignore interrupt while waiting
       }

       // Clean up video stream allocations
       try {
           mMediaCodec.stop();
           mMediaCodec.release();
           mOutputStream.flush();
           mOutputStream.close();
       } catch (Exception e){
           e.printStackTrace();
       }
    }
    ...
    private void encodeFrame(byte[] input, long timestamp) {
       try {
           ByteBuffer[] inputBuffers = mMediaCodec.getInputBuffers();
           ByteBuffer[] outputBuffers = mMediaCodec.getOutputBuffers();

           int inputBufferIndex = mMediaCodec.dequeueInputBuffer(0);

           if (inputBufferIndex >= 0) {
               ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
               inputBuffer.clear();
               inputBuffer.put(input);
               mMediaCodec.queueInputBuffer(inputBufferIndex, 0, input.length, timestamp, 0);
           }

           MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();

           int outputBufferIndex = mMediaCodec.dequeueOutputBuffer(bufferInfo, 0);

           if (outputBufferIndex >= 0) {
               while (outputBufferIndex >= 0) {
                   ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];

                   // Should this be a direct byte buffer?
                   byte[] outData = new byte[bufferInfo.size - bufferInfo.offset];
                   outputBuffer.get(outData);

                   mFrameRecorder.record(outData, bufferInfo.offset, outData.length, timestamp);

                   mMediaCodec.releaseOutputBuffer(outputBufferIndex, false);
                   outputBufferIndex = mMediaCodec.dequeueOutputBuffer(bufferInfo, 0);
               }
           }
           else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
               outputBuffers = mMediaCodec.getOutputBuffers();
           } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
               // ignore for now
           }
       } catch (Throwable t) {
           t.printStackTrace();
       }

    }
    ...

    FFmpegFrameRecorder.java

    ...
    // Hackish codec copy frame recording function
    public boolean record(byte[] encodedData, int offset, int length, long frameCount) throws Exception {
       int ret;

       if (encodedData == null) {
           return false;
       }

       av_init_packet(video_pkt);

       // this is why i wondered whether I should get outputbuffer data into direct byte buffer
       video_outbuf.put(encodedData, 0, encodedData.length);

       video_pkt.data(video_outbuf);
       video_pkt.size(video_outbuf_size);

       video_pkt.pts(frameCount);
       video_pkt.dts(frameCount);

       video_pkt.stream_index(video_st.index());

       synchronized (oc) {
           /* write the compressed frame in the media file */
           if (interleaved && audio_st != null) {
               if ((ret = av_interleaved_write_frame(oc, video_pkt)) < 0) {
                   throw new Exception("av_interleaved_write_frame() error " + ret + " while writing interleaved video frame.");
               }
           } else {
               if ((ret = av_write_frame(oc, video_pkt)) < 0) {
                   throw new Exception("av_write_frame() error " + ret + " while writing video frame.");
               }
           }
       }
       return (video_pkt.flags() & AV_PKT_FLAG_KEY) == 1;
    }
    ...

    When I try to stream the video and run ffprobe on it, I get the following output:

    ffprobe version 2.5.3 Copyright (c) 2007-2015 the FFmpeg developers
     built on Jan 19 2015 12:56:57 with gcc 4.1.2 (GCC) 20080704 (Red Hat 4.1.2-55)
     configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --disable-crystalhd --enable-libass --enable-libdc1394 --enable-libfaac --enable-nonfree --disable-indev=jack --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-openal --enable-libopencv --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --enable-libcaca --shlibdir=/usr/lib64 --enable-runtime-cpudetect
     libavutil      54. 15.100 / 54. 15.100
     libavcodec     56. 13.100 / 56. 13.100
     libavformat    56. 15.102 / 56. 15.102
     libavdevice    56.  3.100 / 56.  3.100
     libavfilter     5.  2.103 /  5.  2.103
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Metadata:
     Server                NGINX RTMP (github.com/arut/nginx-rtmp-module)
     width                 320.00
     height                240.00
     displayWidth          320.00
     displayHeight         240.00
     duration              0.00
     framerate             0.00
     fps                   0.00
     videodatarate         261.00
     videocodecid          7.00
     audiodatarate         62.00
     audiocodecid          10.00
     profile
     level
    [live_flv @ 0x1edb0820] Could not find codec parameters for stream 0 (Video: none, none, 267 kb/s): unknown codec
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    Input #0, live_flv, from 'rtmp://<server>/input/<stream>':
     Metadata:
       Server          : NGINX RTMP (github.com/arut/nginx-rtmp-module)
       displayWidth    : 320
       displayHeight   : 240
       fps             : 0
       profile         :
       level           :
     Duration: 00:00:00.00, start: 16.768000, bitrate: N/A
       Stream #0:0: Video: none, none, 267 kb/s, 1k tbr, 1k tbn, 1k tbc
       Stream #0:1: Audio: aac (LC), 16000 Hz, mono, fltp, 63 kb/s
    Unsupported codec with id 0 for input stream 0

    I am not, by any means, an expert in H264 or video encoding. I know that the encoded frames that come out from MediaCodec contain SPS NAL, PPS NAL, and frame NAL units. I’ve also written the MediaCodec output into a file and was able to play it back (I did have to specify the format and framerate as otherwise it would play too fast).

    My assumption is that things should work (see how little I know :)). Knowing that the SPS and PPS are written out, the decoder should know enough. Yet, ffprobe fails to recognize the codec, fps, and other video information. Do I need to pass packet flag information to the FFmpegFrameRecorder.java:record() function? Or should I use a direct buffer? Any suggestion will be appreciated! I should be able to figure things out with a hint.
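
    As the ffprobe log itself suggests, one low-effort check is to raise the probing limits so ffprobe does not give up before any late SPS/PPS arrive (a sketch only; <server> and <stream> stay as placeholders):

    ffprobe -analyzeduration 10M -probesize 10M "rtmp://<server>/input/<stream>"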

    PS: I know that some codecs use Planar and others SemiPlanar color formats. That distinction will come later if I get past this. Also, I didn’t go the Surface-to-MediaCodec way because I need to support API 17 and it requires more changes than this route, which I think helps me understand the more basic flow. Again, I appreciate any suggestions. Please let me know if something needs to be clarified.

    Update #1

    So having done more testing, I see that my encoder outputs the following frames:

    000000016742800DDA0507E806D0A1350000000168CE06E2
    0000000165B840A6F1E7F0EA24000AE73BEB5F51CC7000233A84240...
    0000000141E2031364E387FD4F9BB3D67F51CC7000279B9F9CFE811...
    0000000141E40304423FFFFF0B7867F89FAFFFFFFFFFFCBE8EF25E6...
    0000000141E602899A3512EF8AEAD1379F0650CC3F905131504F839...
    ...

    The very first frame contains the SPS and PPS. From what I was able to see, these are transmitted only once. The rest are NAL types 1 and 5. So my assumption is that, for ffprobe to see the stream info not only when the stream starts, I should capture the SPS and PPS frames and re-transmit them myself periodically, after a certain number of frames or perhaps before every I-frame. What do you think?
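
    For comparison only (this is the ffmpeg CLI, not the javacv path used above): ffmpeg ships a bitstream filter that re-inserts the extradata (SPS/PPS) in front of keyframes, which is essentially the behaviour described here:

    # Copy the streams while re-inserting SPS/PPS before keyframes
    ffmpeg -i input.mp4 -c copy -bsf:v dump_extra output.ts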

    Update #2

    Unable to validate that I’m writing frames successfully. After having tried to read back the written packet, I cannot validate the written bytes. Strangely, even on a successful write of an IPL image and streaming, I also cannot print out the bytes of the encoded packet after avcodec_encode_video2. Hit the official dead end.