
Other articles (46)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information from the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file’s audio and video streams; generation of a thumbnail: extraction of a (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (4948)

  • FFserver : cannot connect via rtsp

    27 avril 2016, par newfoundstory

    So I'm currently trying to stream my Windows desktop using FFmpeg into a Raspberry Pi running FFserver.
    The client software I'm using needs RTSP, but I cannot connect to the stream no matter what I try.
    I even tried VLC; in its messages it just says it cannot connect to the stream.
    Any help would be greatly appreciated!
    I'm attempting to access the stream at rtsp://169.254.70.227:8544/test.flv; as soon as I do, the FFmpeg feed stops.

    FFserver conf

    `RTSPPort 8544
    HTTPPort 8090                     # Port to bind the server to
    HTTPBindAddress 0.0.0.0
    MaxHTTPConnections 2000
    MaxClients 1000
    MaxBandwidth 10000                # Maximum bandwidth per client
                                      # set this high enough to exceed stream bitrate
    CustomLog -                       # Remove this if you want FFserver to daemoni$

    <Feed feed1.ffm>                  # This is the input feed where FFmpeg will send
      File ./feed1.ffm                # the video stream.
      FileMaxSize 100000K             # Maximum file size for buffering video
      ACL allow 192.168.0.8
      ACL allow 192.168.0.17
      ACL allow 169.254.70.227
      ACL allow 169.254.9.29
      ACL allow 169.254.165.231
      ACL allow 10.14.2.197
      ACL allow 192.168.0.13
      ACL allow 10.14.2.197
      ACL allow 192.168.0.13
      ACL allow 192.168.1.3
      ACL allow 192.168.1.4
      ACL allow 192.168.1.2
    </Feed>

    <Stream test.flv>                 # Output stream URL definition
      Format rtp
      Feed feed1.ffm
      NoAudio

      # Video settings
      VideoCodec libx264
      VideoSize 720x576               # Video resolution
      VideoBufferSize 2000
      VideoFrameRate 30               # Video FPS
      # Parameters passed to encoder
      AVOptionVideo qmin 10
      AVOptionVideo qmax 42

      PreRoll 15
      StartSendOnKey
      MulticastAddress 224.124.0.1
      MulticastPort 5000
      MulticastTTL 16
      VideoBitRate 450                # Video bitrate
    </Stream>

    <Stream>                          # Server status URL
      Format status
      # Only allow local people to get the status
      ACL allow localhost
      ACL allow 192.168.0.0 192.168.255.255
    </Stream>

    <Redirect index.html>             # Just a URL redirect for index
      # Redirect index.html to the appropriate site
      URL http://www.ffmpeg.org/
    </Redirect>`

    FFmpeg feed

    ffmpeg -rtbufsize 2100M -f dshow -r 29.970 -i video=screen-capture-recorder -c video=screen-capture-recorder.flv http://169.254.70.227:8090/feed1.ffm

    FFserver output

     configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree
     libavutil      55. 22.101 / 55. 22.101
     libavcodec     57. 35.100 / 57. 35.100
     libavformat    57. 34.103 / 57. 34.103
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 44.100 /  6. 44.100
     libswscale      4.  1.100 /  4.  1.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    /etc/ffserver.conf:48: Setting default value for video bit rate tolerance = 112500. Use NoDefaults to disable it.
    /etc/ffserver.conf:48: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
    /etc/ffserver.conf:48: Setting default value for video max rate = 20744848. Use NoDefaults to disable it.
    Wed Apr 27 10:33:46 2016 FFserver started.
    Wed Apr 27 10:33:46 2016 224.124.0.1:5000 - - "PLAY test.flv/streamid=0 RTP/MCAST"
    Wed Apr 27 10:33:46 2016 [rtp @ 0x13d4660]Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
    Wed Apr 27 10:33:49 2016 169.254.165.231 - - [GET] "/feed1.ffm HTTP/1.1" 200 4175

    FFmpeg output

    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    [swscaler @ 000000000252f5e0] Warning: data is not aligned! This can lead to a speedloss
    av_interleaved_write_frame(): Unknown error
    time=00:00:05.00 bitrate= 249.0kbits/s speed=0.393x
    Error writing trailer of http://169.254.70.227:8090/feed1.ffm: Error number -10053 occurred
    frame=  204 fps= 15 q=26.0 Lsize=     164kB time=00:00:05.03 bitrate= 266.9kbits/s speed=0.365x
    video:155kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 5.475512%
    [libx264 @ 00000000025187e0] frame I:1     Avg QP:34.24  size: 32151
    [libx264 @ 00000000025187e0] frame P:59    Avg QP:27.14  size:  1807
    [libx264 @ 00000000025187e0] frame B:144   Avg QP:32.16  size:   168
    [libx264 @ 00000000025187e0] consecutive B-frames:  4.9%  2.0%  2.9% 90.2%
    [libx264 @ 00000000025187e0] mb I  I16..4: 26.0% 23.1% 50.9%
    [libx264 @ 00000000025187e0] mb P  I16..4:  1.9%  1.6%  1.1%  P16..4:  4.3%  0.6%  0.4%  0.0%  0.0%    skip:90.2%
    [libx264 @ 00000000025187e0] mb B  I16..4:  0.2%  0.1%  0.1%  B16..8:  3.1%  0.1%  0.0%  direct: 0.1%  skip:96.3%  L0:26.0% L1:73.5% BI: 0.5%
    [libx264 @ 00000000025187e0] final ratefactor: 24.13
    [libx264 @ 00000000025187e0] 8x8 transform intra:31.8% inter:47.1%
    [libx264 @ 00000000025187e0] coded y,u,v intra: 28.1% 8.2% 6.3% inter: 0.6% 0.2% 0.1%
    [libx264 @ 00000000025187e0] i16 v,h,dc,p: 30% 63%  6%  1%
    [libx264 @ 00000000025187e0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 16% 16% 63%  1%  0%  0%  1%  0%  3%
    [libx264 @ 00000000025187e0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 39% 15%  2%  2%  3%  4%  3%  4%
    [libx264 @ 00000000025187e0] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 00000000025187e0] ref P L0: 66.3% 13.1% 16.9%  3.6%
    [libx264 @ 00000000025187e0] ref B L0: 66.6% 29.7%  3.8%
    [libx264 @ 00000000025187e0] ref B L1: 91.7%  8.3%
    [libx264 @ 00000000025187e0] kb/s:191.78
    Conversion failed!
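    For context on the question above: ffserver only answers RTSP requests for streams declared with an RTP-capable format, and the RTSP URL path must match the stream name declared in the configuration, served on the port given by RTSPPort. A minimal sketch of that pairing (the names here mirror the conf quoted above and are illustrative, not a verified fix for this setup):

    ```
    # ffserver.conf fragment: the stream name in <Stream ...> is the path
    # clients request, i.e. rtsp://<server-ip>:8544/test.flv
    RTSPPort 8544

    <Stream test.flv>
      Format rtp            # RTP format is what the RTSP server hands out
      Feed feed1.ffm        # must match a declared <Feed>
      VideoCodec libx264
      NoAudio
    </Stream>
    ```

    If the names or the port in the client URL do not line up with these directives, the RTSP connection is refused even though the HTTP feed on port 8090 works.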
  • Upload to S3 bucket from FFMpegCore

    18 avril 2022, par user1765862

    I'm using FFMpegCore to create an image from the video at the 5-second mark.


    var inputFile = "images/preview_video.mp4";
    var processedFile = "path-to-s3-bucket";
    await FFMpeg.SnapshotAsync(inputFile, processedFile, new Size(800, 600), TimeSpan.FromMilliseconds(5000));


    How can I upload this processed file (the image) to my S3 bucket using the FFMpegCore snapshot?


  • FFMPEG convert NV12 format to NV12 with the same height and width

    7 September 2022, par Chun Wang

    I want to use FFmpeg 4.2.2 to convert an input NV12 frame to an output NV12 frame with the same height and width. I used sws_scale for the conversion, but the output frame's colors are all green.


    P.S. It may seem that there is no need to use swscale to get a frame with the same width, height, and format, but it is necessary in my project for dealing with other frames.


    I have successfully converted input NV12 to output NV12 with a different height and width, and the output frame's colors were correct. But I FAILED to convert NV12 to NV12 with the same height and width. It is so weird, I can't work out why :(


    I want to know what the reason is and what I should do. The following is my code; swsCtx4 is used to convert the NV12 input to the NV12 output, while the other contexts are used for tests converting other formats. Thank you for your help.


    // the main code is
    AVFrame* frame_nv12 = av_frame_alloc();
    frame_nv12->width = in_width;
    frame_nv12->height = in_height;
    frame_nv12->format = AV_PIX_FMT_NV12;
    uint8_t* frame_buffer_nv12 = (uint8_t*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_NV12, in_width, in_height, 1));
    av_image_fill_arrays(frame_nv12->data, frame_nv12->linesize, frame_buffer_nv12, AV_PIX_FMT_NV12, in_width, in_height, 1);

    AVFrame* frame2_nv12 = av_frame_alloc();
    frame2_nv12->width = in_width1;
    frame2_nv12->height = in_height1;
    frame2_nv12->format = AV_PIX_FMT_NV12;
    uint8_t* frame2_buffer_nv12 = (uint8_t*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_NV12, in_width1, in_height1, 1));
    av_image_fill_arrays(frame2_nv12->data, frame2_nv12->linesize, frame2_buffer_nv12, AV_PIX_FMT_NV12, in_width1, in_height1, 1);

    SwsContext* swsCtx4 = sws_getContext(in_width, in_height, AV_PIX_FMT_NV12, in_width1, in_height1, AV_PIX_FMT_NV12,
        SWS_BILINEAR | SWS_PRINT_INFO, NULL, NULL, NULL);
    printf("swsCtx4\n");

    ret = sws_scale(swsCtx4, frame_nv12->data, frame_nv12->linesize, 0, frame_nv12->height, frame2_nv12->data, frame2_nv12->linesize);
    if (ret < 0) {
        printf("sws_4scale failed\n");
    }


    // the complete code
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }
    #include <seeker/loggerApi.h>
    #include "seeker/common.h"
    #include <iostream>

    // Fix found: set pts to 0, dts to 0
    #define FILE_SRC "testPicFilter.yuv"  // source file
    #define FILE_DES "test11.yuv"         // destination file

    int count = 0;

    int main(int argc, char* argv[])
    {
        av_register_all();

        int ret = 0;

        //std::this_thread::sleep_for(std::chrono::milliseconds(5000));
        int count1 = 1;
        int piccount;
        int align = 1;

        /* open the input YUV file */
        FILE* fp_in = fopen(FILE_SRC, "rb+");
        if (fp_in == NULL)
        {
            printf("failed to open input file\n");
            return 0;
        }
        int in_width = 640;
        int in_height = 360;
        int in_width1 = 640;
        int in_height1 = 360;

        /* the processed output file */
        FILE* fp_out = fopen(FILE_DES, "wb+");
        if (fp_out == NULL)
        {
            printf("failed to create output file\n");
            return 0;
        }
        char buff[50];

        AVFrame* frame_in = av_frame_alloc();
        unsigned char* frame_buffer_in;
        frame_buffer_in = (unsigned char*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_YUV420P, in_width, in_height, 1));
        /* set the image plane pointers and memory alignment */
        av_image_fill_arrays(frame_in->data, frame_in->linesize, frame_buffer_in, AV_PIX_FMT_YUV420P, in_width, in_height, 1);

        frame_in->width = in_width;
        frame_in->height = in_height;
        frame_in->format = AV_PIX_FMT_YUV420P;

        // convert the input YUV into frame_nv12
        AVFrame* frame_nv12 = av_frame_alloc();
        frame_nv12->width = in_width;
        frame_nv12->height = in_height;
        frame_nv12->format = AV_PIX_FMT_NV12;
        uint8_t* frame_buffer_nv12 = (uint8_t*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_NV12, in_width, in_height, 1));
        av_image_fill_arrays(frame_nv12->data, frame_nv12->linesize, frame_buffer_nv12, AV_PIX_FMT_NV12, in_width, in_height, 1);

        AVFrame* frame2_nv12 = av_frame_alloc();
        frame2_nv12->width = in_width1;
        frame2_nv12->height = in_height1;
        frame2_nv12->format = AV_PIX_FMT_NV12;
        uint8_t* frame2_buffer_nv12 = (uint8_t*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_NV12, in_width1, in_height1, 1));
        av_image_fill_arrays(frame2_nv12->data, frame2_nv12->linesize, frame2_buffer_nv12, AV_PIX_FMT_NV12, in_width1, in_height1, 1);

        // convert input RGB into YUV
        AVFrame* frame_yuv = av_frame_alloc();
        frame_yuv->width = in_width;
        frame_yuv->height = in_height;
        frame_yuv->format = AV_PIX_FMT_YUV420P;
        uint8_t* frame_buffer_yuv = (uint8_t*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_YUV420P, in_width, in_height, 1));
        av_image_fill_arrays(frame_yuv->data, frame_yuv->linesize, frame_buffer_yuv,
            AV_PIX_FMT_YUV420P, in_width, in_height, 1);

        SwsContext* swsCtx = nullptr;
        swsCtx = sws_getContext(in_width, in_height, AV_PIX_FMT_YUV420P, in_width, in_height, AV_PIX_FMT_NV12,
            SWS_BILINEAR | SWS_PRINT_INFO, NULL, NULL, NULL);
        printf("swsCtx\n");

        SwsContext* swsCtx4 = nullptr;
        swsCtx4 = sws_getContext(in_width, in_height, AV_PIX_FMT_NV12, in_width1, in_height1, AV_PIX_FMT_NV12,
            SWS_BILINEAR | SWS_PRINT_INFO, NULL, NULL, NULL);
        printf("swsCtx4\n");

        SwsContext* swsCtx2 = nullptr;
        swsCtx2 = sws_getContext(in_width1, in_height1, AV_PIX_FMT_NV12, in_width, in_height, AV_PIX_FMT_YUV420P,
            SWS_BILINEAR | SWS_PRINT_INFO, NULL, NULL, NULL);
        printf("swsCtx2\n");

        while (1)
        {
            count++;

            if (fread(frame_buffer_in, 1, in_width * in_height * 3 / 2, fp_in) != in_width * in_height * 3 / 2)
            {
                break;
            }

            frame_in->data[0] = frame_buffer_in;
            frame_in->data[1] = frame_buffer_in + in_width * in_height;
            frame_in->data[2] = frame_buffer_in + in_width * in_height * 5 / 4;

            // convert to NV12
            int ret = sws_scale(swsCtx, frame_in->data, frame_in->linesize, 0, frame_in->height, frame_nv12->data, frame_nv12->linesize);
            if (ret < 0) {
                printf("sws_scale swsCtx failed\n");
            }

            ret = sws_scale(swsCtx4, frame_nv12->data, frame_nv12->linesize, 0, frame_nv12->height, frame2_nv12->data, frame2_nv12->linesize);
            if (ret < 0) {
                printf("sws_scale swsCtx4 failed\n");
            }

            if (ret > 0) {
                int ret2 = sws_scale(swsCtx2, frame2_nv12->data, frame2_nv12->linesize, 0, frame2_nv12->height, frame_yuv->data, frame_yuv->linesize);
                if (ret2 < 0) {
                    printf("sws_scale swsCtx2 failed\n");
                }
                I_LOG("frame_yuv:{},{}", frame_yuv->width, frame_yuv->height);

                //I_LOG("frame_yuv:{}", frame_yuv->format);

                if (frame_yuv->format == AV_PIX_FMT_YUV420P)
                {
                    for (int i = 0; i < frame_yuv->height; i++)
                    {
                        fwrite(frame_yuv->data[0] + frame_yuv->linesize[0] * i, 1, frame_yuv->width, fp_out);
                    }
                    for (int i = 0; i < frame_yuv->height / 2; i++)
                    {
                        fwrite(frame_yuv->data[1] + frame_yuv->linesize[1] * i, 1, frame_yuv->width / 2, fp_out);
                    }
                    for (int i = 0; i < frame_yuv->height / 2; i++)
                    {
                        fwrite(frame_yuv->data[2] + frame_yuv->linesize[2] * i, 1, frame_yuv->width / 2, fp_out);
                    }
                    printf("yuv to file\n");
                }
            }
        }

        fclose(fp_in);
        fclose(fp_out);
        av_frame_free(&frame_in);
        av_frame_free(&frame_nv12);
        av_frame_free(&frame_yuv);
        sws_freeContext(swsCtx);
        sws_freeContext(swsCtx2);
        sws_freeContext(swsCtx4);

        //std::this_thread::sleep_for(std::chrono::milliseconds(8000));

        return 0;
    }
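    A note on the symptom in the question: a solid-green NV12 frame usually means the Y plane carries data while the interleaved UV plane is zeroed or mis-addressed. When source and destination have the same size and format, one way to take swscale out of the equation is a plain per-plane copy (in libav code the equivalent call is av_image_copy()). The sketch below is a standalone illustration of NV12's memory layout, with no libav dependency; all names in it are illustrative:

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Copy an NV12 image plane by plane, honoring per-plane strides (linesize).
    // NV12 layout: a full-resolution Y plane (width x height bytes), followed by
    // a half-height plane of interleaved U/V samples (width x height/2 bytes).
    // If the UV plane is skipped or mis-addressed, the decoded picture loses its
    // chroma, which shows up as a uniformly green frame.
    static void copy_nv12(const uint8_t* src_y, int src_stride_y,
                          const uint8_t* src_uv, int src_stride_uv,
                          uint8_t* dst_y, int dst_stride_y,
                          uint8_t* dst_uv, int dst_stride_uv,
                          int width, int height) {
        for (int row = 0; row < height; ++row)      // luma: height rows of width bytes
            std::memcpy(dst_y + row * dst_stride_y, src_y + row * src_stride_y, width);
        for (int row = 0; row < height / 2; ++row)  // chroma: height/2 rows, U and V interleaved
            std::memcpy(dst_uv + row * dst_stride_uv, src_uv + row * src_stride_uv, width);
    }

    int main() {
        const int w = 640, h = 360;  // same dimensions as the question's frames
        std::vector<uint8_t> src(w * h * 3 / 2), dst(w * h * 3 / 2, 0);
        for (size_t i = 0; i < src.size(); ++i) src[i] = static_cast<uint8_t>(i % 251);

        // Contiguous buffers: stride == width, UV plane starts right after Y.
        copy_nv12(src.data(), w, src.data() + w * h, w,
                  dst.data(), w, dst.data() + w * h, w, w, h);

        assert(src == dst);  // both planes arrived intact, chroma included
        return 0;
    }
    ```

    Comparing the sizes and offsets used here against the linesize values that av_image_fill_arrays() produced is a quick way to check whether the chroma plane in the failing path is actually being written.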
