Other articles (97)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those of the channel sites, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • The authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

On other sites (9656)

  • How to convert video and extract frames from video using FFmpeg in a single command

    13 July 2013, by kheya

    I am trying to convert a video file and extract image frames from it in a single command.
    I can do this in 2 steps, but I want to use a pipe-like technique to do it in one.

    This is what I have:
    for %%a in ("*.avi") do ffmpeg -i "%%a" -c:v libx264 -preset slow -crf 20 -c:a libvo_aacenc -b:a 128k "%%~na.mp4" <— converts correctly

    I need to incorporate this extract command:

    ffmpeg -i inputfile.avi  -r  1  -t  4  image-%d.jpeg

    Merging the two commands gives an error.

    How do I do it?

    EDIT:
    This is what I have, but it just converts the video; no JPEG image is created as a second output:

    for %%a in ("*.avi") do ffmpeg -i "%%a" -c:v libx264 -preset slow -crf 20 -c:a libvo_aacenc -b:a 128k "%%~na.mp4" | ffmpeg -r 1 -s 4cif "%%~na.jpeg"
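For reference, ffmpeg can produce several outputs from a single input in one invocation; the options preceding each output file apply only to that output, so no pipe is needed. A sketch with placeholder filenames (not the asker's batch variables):

```shell
# One input, two outputs: the H.264/AAC MP4 conversion plus one JPEG
# per second for the first 4 seconds. Each output gets its own options.
ffmpeg -i input.avi \
  -c:v libx264 -preset slow -crf 20 -c:a libvo_aacenc -b:a 128k output.mp4 \
  -r 1 -t 4 image-%d.jpeg
```

Note that inside a batch file the JPEG pattern needs its percent sign doubled ("%%~na-%%d.jpeg"), since % is special in batch syntax.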
  • ffmpeg encoding a video with time_base not equal to framerate does not work in HTML5 video players

    1 July 2019, by Gilgamesh22

    I have a time_base of 90000 with a frame rate of 30. I can generate an H.264 video and have it work in VLC, but this video does not work in the browser player. If I change the time_base to 30, it works fine.

    Note: I am changing frame->pts appropriately to match the time_base.
    Note: the video does not have an audio stream.

    //header.h
    AVCodecContext *cctx;
    AVStream* stream;

    Here is the non-working example code:

    //source.cpp
    stream->time_base = { 1, 90000 };
    stream->r_frame_rate = { fps, 1 };
    stream->avg_frame_rate = { fps, 1 };

    cctx->codec_id = codecId;
    cctx->time_base = { 1 ,  90000 };
    cctx->framerate = { fps, 1 };

    // ......
    // add frame code later on; timestamps are in milliseconds
    frame->pts = (timestamp - startTimeStamp)* 90;

    Here is the working example code

    //source.cpp
    stream->time_base = { 1, fps};
    stream->r_frame_rate = { fps, 1 };
    stream->avg_frame_rate = { fps, 1 };

    cctx->codec_id = codecId;
    cctx->time_base = { 1 ,  fps};
    cctx->framerate = { fps, 1 };

    // ......
    // add frame code; timestamps are in milliseconds
    frame->pts = (timestamp - startTimeStamp)/(1000/fps);

    Any ideas why the second example works in the HTML5 video player and the first does not?

  • Create video file by mixing video and audio byte arrays FFmpeg & C++

    20 January 2021, by Sergey Zinovev

    I capture audio and video.

    Video is captured using the Desktop Duplication API and, as a result, I get 2D textures. These 2D textures are char arrays.

    m_immediateContext->CopyResource(currTexture, m_acquiredDesktopImage.Get());

    D3D11_MAPPED_SUBRESOURCE* resource = new D3D11_MAPPED_SUBRESOURCE;
    UINT subresource = D3D11CalcSubresource(0, 0, 0);

    m_immediateContext->Map(currTexture, subresource, D3D11_MAP_READ_WRITE, 0, resource);

    uchar* buffer = new uchar[m_desc.Height * m_desc.Width * 4];
    const uchar* mappedData = static_cast<const uchar*>(resource->pData);
    memcpy(buffer, mappedData, m_desc.Height * m_desc.Width * 4);

    The 2D textures are then converted to cv::Mat and the video is written out using OpenCV.

    Audio is captured using WASAPI and, as a result, I get samples:

    BYTE* buffer = new BYTE[numFramesAvailable * pwfx->nBlockAlign];
    memcpy(buffer, pData, numFramesAvailable * pwfx->nBlockAlign);

    These samples are byte arrays, which are then written to a WAV file.

    As a result, I get two files, video and audio, which are merged using FFmpeg.

    I want to skip creating the separate video and audio files and directly create one file composed of two streams (video and audio) from the raw data.

    To do that I need help with the FFmpeg code: specifically, with creating and configuring the correct output context and output streams, and with encoding the raw data.

    I have already studied FFmpeg's doc/examples, but still can't get the code working. So I really need your help.
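Not a full answer, but a condensed sketch of the output-context and stream setup being asked about, using the standard libavformat/libavcodec calls. Codec choices, sizes, and rates below are assumptions, and all error checking is omitted:

```cpp
// Sketch: mux one H.264 video stream and one AAC audio stream into MP4.
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

void setup_and_run_muxer() {
    AVFormatContext* oc = nullptr;
    avformat_alloc_output_context2(&oc, nullptr, nullptr, "out.mp4");

    // Video encoder: BGRA desktop frames must be converted to YUV420P
    // (e.g. with sws_scale) before being sent to the encoder.
    const AVCodec* vcodec = avcodec_find_encoder(AV_CODEC_ID_H264);
    AVCodecContext* vctx = avcodec_alloc_context3(vcodec);
    vctx->width = 1920;                // assumption: capture size
    vctx->height = 1080;
    vctx->pix_fmt = AV_PIX_FMT_YUV420P;
    vctx->time_base = {1, 30};        // assumption: 30 fps
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        vctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    avcodec_open2(vctx, vcodec, nullptr);

    AVStream* vs = avformat_new_stream(oc, nullptr);
    avcodec_parameters_from_context(vs->codecpar, vctx);

    // Audio encoder: WASAPI PCM must be resampled (swr_convert) to the
    // encoder's sample format; channel-layout setup varies by FFmpeg version.
    const AVCodec* acodec = avcodec_find_encoder(AV_CODEC_ID_AAC);
    AVCodecContext* actx = avcodec_alloc_context3(acodec);
    actx->sample_rate = 48000;        // assumption: WASAPI mix rate
    actx->sample_fmt = AV_SAMPLE_FMT_FLTP;
    actx->time_base = {1, actx->sample_rate};
    avcodec_open2(actx, acodec, nullptr);

    AVStream* as = avformat_new_stream(oc, nullptr);
    avcodec_parameters_from_context(as->codecpar, actx);

    avio_open(&oc->pb, "out.mp4", AVIO_FLAG_WRITE);
    avformat_write_header(oc, nullptr);

    // Per captured frame / sample block (same pattern for both streams):
    //   avcodec_send_frame(ctx, frame);
    //   while (avcodec_receive_packet(ctx, pkt) == 0) {
    //       av_packet_rescale_ts(pkt, ctx->time_base, stream->time_base);
    //       pkt->stream_index = stream->index;
    //       av_interleaved_write_frame(oc, pkt);
    //   }
    // Finally: flush both encoders with avcodec_send_frame(ctx, nullptr),
    // drain the remaining packets, then:
    av_write_trailer(oc);
    avio_closep(&oc->pb);
    avformat_free_context(oc);
}
```

This mirrors the structure of FFmpeg's mux/encode examples in doc/examples (muxing.c); the interleaving of audio and video packets is handled by av_interleaved_write_frame, provided each packet's pts is rescaled to its stream's time base.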