Advanced search

Media (1)

Keyword: - Tags -/biomaping

Other articles (102)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as MP4, Ogv and WebM (supported by HTML5), with MP4 also supported by Flash.
    Audio files are encoded as MP3 and Ogg (supported by HTML5), with MP3 also supported by Flash.
    Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • What exactly does this script do?

    18 January 2011, by

    This script is written in Bash, so it can easily be run on virtually any server.
    It is only compatible with a specific list of distributions (see the "Liste des distributions compatibles" page).
    Installing MediaSPIP's dependencies
    Its main role is to install all the software dependencies needed on the server side, namely:
    The basic tools required to install the remaining dependencies; the development tools: build-essential (via APT from the official repositories); (...)

On other sites (7687)

  • What is the correct way to stream custom packets using ffmpeg?

    9 January 2019, by Lucker10

    I want to encode frames from a camera using NvPipe and stream them over RTP with FFmpeg. My code produces the following errors when I try to decode the stream:

    [h264 @ 0x7f3c6c007e80] decode_slice_header error
    [h264 @ 0x7f3c6c007e80] non-existing PPS 0 referenced
    [h264 @ 0x7f3c6c007e80] decode_slice_header error
    [h264 @ 0x7f3c6c007e80] non-existing PPS 0 referenced
    [h264 @ 0x7f3c6c007e80] decode_slice_header error
    [h264 @ 0x7f3c6c007e80] non-existing PPS 0 referenced
    [h264 @ 0x7f3c6c007e80] decode_slice_header error
    [h264 @ 0x7f3c6c007e80] no frame!
    [h264 @ 0x7f3c6c007e80] non-existing PPS 0 referenced    0B f=0/0  
       Last message repeated 1 times

    On another PC it is not even able to stream, and fails with a segmentation fault in av_interleaved_write_frame(..). How should the AVPacket and its time base be initialized so that the stream can be received with ffplay/VLC/other software?

    My code:

    avformat_network_init();
    // init encoder
    AVPacket *pkt = new AVPacket();
    int targetBitrate = 1000000;
    int targetFPS = 30;
    const uint32_t width = 640;
    const uint32_t height = 480;

    NvPipe* encoder = NvPipe_CreateEncoder(NVPIPE_BGRA32, NVPIPE_H264, NVPIPE_LOSSY, targetBitrate, targetFPS);

    // init stream output
    std::string str = "rtp://127.0.0.1:49990";
    AVStream* stream = nullptr;
    AVOutputFormat *output_format = av_guess_format("rtp", nullptr, nullptr);
    AVFormatContext *output_format_ctx = nullptr;
    int ret;

    avformat_alloc_output_context2(&output_format_ctx, output_format, output_format->name, str.c_str());

    // open output url
    if (!(output_format->flags & AVFMT_NOFILE)){
        ret = avio_open(&output_format_ctx->pb, str.c_str(), AVIO_FLAG_WRITE);
    }

    output_format_ctx->oformat = output_format;
    output_format->video_codec = AV_CODEC_ID_H264;

    stream = avformat_new_stream(output_format_ctx, nullptr);
    stream->id = 0;
    stream->codecpar->codec_id = AV_CODEC_ID_H264;
    stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    stream->codecpar->width = width;
    stream->codecpar->height = height;
    stream->time_base.den = 1;
    stream->time_base.num = targetFPS; // 30fps

    /* Write the header */
    avformat_write_header(output_format_ctx, nullptr); // this seems to destroy the timebase of the stream

    std::vector<uint8_t> rgba(width * height * 4);
    std::vector<uint8_t> compressed(rgba.size());
    int frameCnt = 0;

    // encoding and streaming
    while (true)
    {
        frameCnt++;

        // Encoding: construct a dummy frame
        for (uint32_t y = 0; y < height; ++y)
            for (uint32_t x = 0; x < width; ++x)
                rgba[4 * (y * width + x) + 1] = (255.0f * x * y) / (width * height) * (y % 100 < 50);

        uint64_t size = NvPipe_Encode(encoder, rgba.data(), width * 4, compressed.data(), compressed.size(), width, height, false); // last parameter needs to be true for keyframes

        av_init_packet(pkt);
        pkt->data = compressed.data();
        pkt->size = size;
        pkt->pts = frameCnt;

        // An Annex B keyframe starts with an SPS NAL unit (0x67)
        if (!memcmp(compressed.data(), "\x00\x00\x00\x01\x67", 5)) {
            pkt->flags |= AV_PKT_FLAG_KEY;
        }

        // stream
        fflush(stdout);

        // Write the compressed frame into the output
        pkt->pts = av_rescale_q(frameCnt, AVRational {1, targetFPS}, stream->time_base);
        pkt->dts = pkt->pts;
        pkt->stream_index = stream->index;

        /* Write the data of the packet to the output format */
        av_interleaved_write_frame(output_format_ctx, pkt);

        /* Reset the packet */
        av_packet_unref(pkt);
    }

    The .sdp file to open the stream with ffplay looks like this:

    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    c=IN IP4 127.0.0.1
    t=0 0
    a=tool:libavformat 58.18.101
    m=video 49990 RTP/AVP 96
    a=rtpmap:96 H264/90000
    a=fmtp:96 packetization-mode=1
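
    The time-base arithmetic the question asks about can be checked without FFmpeg. The sketch below reimplements the core of av_rescale_q in plain Python (an illustrative approximation, not FFmpeg's actual rounding-aware implementation) to show how a frame counter in a 1/30 time base maps onto the 90 kHz clock that RTP uses for H.264:

```python
from fractions import Fraction

def rescale_q(value, src_tb, dst_tb):
    # Simplified stand-in for FFmpeg's av_rescale_q: convert a
    # timestamp expressed in src_tb units into dst_tb units.
    return int(value * Fraction(*src_tb) / Fraction(*dst_tb))

# A stream time base of 1/30 means one tick lasts 1/30 s (one frame
# at 30 fps). RTP/H.264 uses a 90000 Hz clock, so frame n lands at
# n * 90000 / 30 = n * 3000 ticks.
print(rescale_q(1, (1, 30), (1, 90000)))   # 3000
print(rescale_q(30, (1, 30), (1, 90000)))  # 90000 (one second)
```

    Note that the code above sets stream->time_base.num = targetFPS and .den = 1, i.e. a time base of 30/1: each tick would then represent 30 seconds rather than 1/30 s, which would make the generated timestamps meaningless to a receiver.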

  • Imageio unable to read webcam at correct framerate

    28 May 2019, by Jason Nick Porter

    I'm trying to read frames from a webcam and analyze them in real time, but since my function AnalyzeFrame() is faster than the framerate, it ends up pulling the same frame 1-4 times in a row, messing up my data. Here's basically what I'm running.

    import imageio

    cam = imageio.get_reader('<video0>', fps=30)

    while not cam.closed:
       print(AnalyzeFrame(cam.get_next_data()))

    A few notes: my webcam should be able to handle 30 fps, but I'm averaging 12-14 fps. I've timed each individual step, and there's very little regularity to the framerate. Some frames only get analyzed once, because they're in the buffer for 20 or so milliseconds; others get analyzed 4 times over a span of 100+ ms. Is there something in my code that's causing this framerate problem?
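
    Since the reader's buffer hands back the same frame several times, one workaround (a sketch with hypothetical names, not from the original post) is to skip frames whose content is identical to the previously analyzed one:

```python
import hashlib

def fresh_frames(frames):
    # Yield only frames whose bytes differ from the previously yielded
    # frame, so a loop that polls faster than the camera's framerate
    # does not analyze the same buffered frame several times.
    last = None
    for frame in frames:
        digest = hashlib.sha1(bytes(frame)).hexdigest()
        if digest != last:
            last = digest
            yield frame

# Simulated capture where the reader repeats buffered frames:
captured = [b"f0", b"f0", b"f1", b"f1", b"f1", b"f2"]
print(len(list(fresh_frames(captured))))  # 3
```

    In the imageio loop this would wrap cam.get_next_data(); hashing is cheap relative to per-frame analysis, though frame timestamps (when the backend exposes them) are a more robust signal of a genuinely new frame.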

  • Output image with correct aspect with ffmpeg

    11 February, by koichirose

    I have an MKV video with the following properties (obtained with mediainfo):

    Width                                    : 718 pixels
    Height                                   : 432 pixels
    Display aspect ratio                     : 2.35:1
    Original display aspect ratio            : 2.35:1

    I'd like to take screenshots of it at certain times:

    ffmpeg -ss 4212 -i filename.mkv -frames:v 1 -q:v 2 out.jpg

    This will produce a 718x432 jpg image, but the aspect ratio is wrong (the image is "squeezed" horizontally). AFAIK, the output image should be 1015x432 (with width = height * DAR). Is this calculation correct?

    Is there a way to have ffmpeg output images with the correct size/AR for all videos (i.e. no "hardcoded" values)? I tried playing with the setdar/setsar filters without success.

    Also, out of curiosity, trying to obtain the SAR and DAR with ffmpeg produces:

    Stream #0:0(eng): Video: h264 (High), yuv420p(tv, smpte170m/smpte170m/bt709, progressive),
    718x432 [SAR 64:45 DAR 2872:1215], SAR 155:109 DAR 55645:23544, 24.99 fps, 24.99 tbr, 1k tbn, 49.98 tbc (default)

    2872/1215 is 2.363, a slightly different value from what mediainfo reported. Does anyone know why?
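
    The arithmetic behind both numbers can be checked directly. This sketch (plain Python, no ffmpeg required; the function name is made up for illustration) derives the display width from the stored height and each reported DAR, and confirms that ffmpeg's 2872:1215 is exactly the stored 718:432 shape multiplied by the SAR 64:45:

```python
from fractions import Fraction

def display_width(stored_height, dar):
    # Display width = stored height * display aspect ratio,
    # rounded to the nearest pixel.
    return round(stored_height * dar)

# mediainfo's rounded DAR of 2.35:1 gives the 1015 the question expects:
print(display_width(432, Fraction(235, 100)))  # 1015

# ffmpeg derives its DAR exactly: 718/432 * SAR 64/45 = 2872/1215
assert Fraction(718, 432) * Fraction(64, 45) == Fraction(2872, 1215)
print(float(Fraction(2872, 1215)))  # ≈ 2.3638
```

    One plausible reading, then, is that mediainfo reports a rounded "named" ratio (2.35:1) while ffmpeg computes the exact value from the stored SAR, which would account for the small discrepancy.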