Other articles (29)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also have to make other modifications (...)

  • Organize by category

    17 May 2013

    In MédiaSPIP, a section ("rubrique") has two names: category and rubrique.
    The documents stored in MédiaSPIP can be filed in different categories. You can create a category by clicking "publish a category" in the "publish" menu at the top right (after logging in). A category can itself be filed inside another category, so you can build a tree of categories.
    When the next document is published, the newly created category will be offered (...)

  • Supporting all media types

    13 avril 2011, par

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:

    • images: png, gif, jpg, bmp and more
    • audio: MP3, Ogg, Wav and more
    • video: AVI, MP4, OGV, mpg, mov, wmv and more
    • text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (5611)

  • Catch refreshing Terminal output in Python

    26 January 2019, by Aaroknight

    I have written a Python script which converts movies with ffmpeg from anything to h265 (HEVC). It works fine so far, and I can already capture terminal output (see "How can I get terminal output in python?" on Stack Overflow). I have also tried the solution from "Catching Terminal Output in Python" on Stack Overflow, but neither of them is really what I need.

    The current code is the following:

    import os
    import subprocess

    def convert(path):
        # The original had a bare "pass" here, which has no effect; presumably
        # files under 500 MB should be skipped, so return instead.
        if os.path.getsize(path) < 500000000:
            return
        name = path.split("/")[-1]
        # Create an "hevc" subdirectory next to the input file for the output.
        os.mkdir(path.replace(name, "hevc/"))
        outvid = path.replace(name, "hevc/" + name)
        cmd = ["ffmpeg", "-hwaccel", "cuvid", "-i", path, "-c:v", "hevc_nvenc", "-preset",
               "slow", "-rc", "vbr_hq", "-max_muxing_queue_size", "1000", "-map", "0", "-map_metadata",
               "0", "-map_chapters", "0", "-c:a", "copy", "-c:s", "copy", outvid]

        # communicate() blocks until ffmpeg exits, then returns all of stdout at once.
        output = subprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()[0]
        print(output)

    While ffmpeg is converting something, the bottom line of the terminal output usually refreshes itself every second, showing fps, time, etc. See the bottom line of the screenshot.

    (Screenshot: normal ffmpeg output)

    In Python I just get a static output:

    (Screenshot: Python IDE output)

    So do you guys have any idea how to catch that refreshing output?
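
    One approach, sketched minimally below: ffmpeg writes that status line to stderr (not stdout) and redraws it with carriage returns (\r) rather than newlines, so reading stderr incrementally and splitting on \r yields each refresh as it happens. The run_with_progress name is illustrative, not part of the original script.

    import subprocess

    def run_with_progress(cmd):
        # ffmpeg sends its "frame= ... fps= ... time= ..." line to stderr.
        proc = subprocess.Popen(cmd, stderr=subprocess.PIPE)
        buf = b""
        while True:
            ch = proc.stderr.read(1)
            if not ch:
                break
            if ch in (b"\r", b"\n"):
                if buf:
                    # One full status refresh: parse or print it here.
                    print(buf.decode("utf-8", errors="replace"))
                buf = b""
            else:
                buf += ch
        return proc.wait()

    Alternatively, ffmpeg's -progress option (e.g. -progress pipe:1) emits machine-readable key=value progress lines on stdout, which can be easier to parse than the human-readable status line.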

  • Create a video file by mixing video and audio byte arrays with FFmpeg & C++

    20 January 2021, by Sergey Zinovev

    I capture audio and video.

    Video is captured using the Desktop Duplication API and, as a result, I get 2D textures.
    These 2D textures are char arrays.

    m_immediateContext->CopyResource(currTexture, m_acquiredDesktopImage.Get());

    D3D11_MAPPED_SUBRESOURCE* resource = new D3D11_MAPPED_SUBRESOURCE;
    UINT subresource = D3D11CalcSubresource(0, 0, 0);

    m_immediateContext->Map(currTexture, subresource, D3D11_MAP_READ_WRITE, 0, resource);

    // Copy the mapped 32-bit (BGRA) pixels out of the staging texture.
    uchar* buffer = new uchar[m_desc.Height * m_desc.Width * 4];
    const uchar* mappedData = static_cast<const uchar*>(resource->pData);
    memcpy(buffer, mappedData, m_desc.Height * m_desc.Width * 4);

    The 2D textures are then converted to cv::Mat and the video is written using OpenCV.
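
    For concreteness, a minimal sketch of that conversion step, assuming a BGRA desktop buffer; the helper name and the writer parameters are assumptions, not the author's code.

    #include <opencv2/opencv.hpp>

    // Wrap a mapped BGRA desktop buffer in a cv::Mat and append it to an
    // already opened cv::VideoWriter.
    void writeFrame(cv::VideoWriter& writer, const uchar* buffer,
                    int width, int height)
    {
        // No copy here: the Mat merely views the existing BGRA buffer.
        cv::Mat bgra(height, width, CV_8UC4, const_cast<uchar*>(buffer));
        cv::Mat bgr;
        cv::cvtColor(bgra, bgr, cv::COLOR_BGRA2BGR); // VideoWriter expects 3 channels
        writer.write(bgr);
    }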

    Audio is captured using WASAPI and, as a result, I get samples.

    // Copy the captured PCM frames out of the WASAPI packet buffer.
    BYTE* buffer = new BYTE[numFramesAvailable * pwfx->nBlockAlign];
    memcpy(buffer, pData, numFramesAvailable * pwfx->nBlockAlign);

    These samples are byte arrays, which are then written to a WAV file.

    As a result, I get two files, one video and one audio, which are merged using FFmpeg.

    I want to skip the creation of the video and audio files and instead directly create a single file composed of the two streams (video and audio) from the raw data.

    To make that work I need help with the FFmpeg code: specifically, with creating and setting up the correct output context and output streams, and with how to encode the raw data.

    I've already studied FFmpeg's doc/examples, but I still can't produce working code. So, I really need your help, guys.
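
    As a starting point, here is a minimal sketch of that output side, modelled on FFmpeg's doc/examples/muxing.c: one output context with an H.264 video stream and an AAC audio stream, with encoded packets rescaled into each stream's time base before interleaved writing. The file name, sizes, rates and codec choices are assumptions, and error checking is omitted.

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    struct OutStream {
        AVStream* st = nullptr;
        AVCodecContext* enc = nullptr;
    };

    static OutStream add_stream(AVFormatContext* oc, AVCodecID codec_id) {
        OutStream out;
        const AVCodec* codec = avcodec_find_encoder(codec_id);
        out.st = avformat_new_stream(oc, nullptr);
        out.enc = avcodec_alloc_context3(codec);
        if (codec->type == AVMEDIA_TYPE_VIDEO) {
            out.enc->width = 1920;                  // assumed desktop size
            out.enc->height = 1080;
            out.enc->pix_fmt = AV_PIX_FMT_YUV420P;  // convert BGRA via sws_scale first
            out.enc->time_base = {1, 30};           // assumed 30 fps
        } else {
            out.enc->sample_fmt = AV_SAMPLE_FMT_FLTP;
            out.enc->sample_rate = 48000;           // use pwfx->nSamplesPerSec in practice
            av_channel_layout_default(&out.enc->ch_layout, 2); // stereo (FFmpeg >= 5.1 API)
            out.enc->time_base = {1, 48000};
        }
        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
            out.enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        avcodec_open2(out.enc, codec, nullptr);
        avcodec_parameters_from_context(out.st->codecpar, out.enc);
        return out;
    }

    // Send one raw frame (or nullptr to flush) and mux the resulting packets.
    static void encode_and_mux(AVFormatContext* oc, OutStream& s, AVFrame* frame) {
        avcodec_send_frame(s.enc, frame);           // caller sets frame->pts
        AVPacket* pkt = av_packet_alloc();
        while (avcodec_receive_packet(s.enc, pkt) == 0) {
            av_packet_rescale_ts(pkt, s.enc->time_base, s.st->time_base);
            pkt->stream_index = s.st->index;
            av_interleaved_write_frame(oc, pkt);    // interleaves by timestamp
        }
        av_packet_free(&pkt);
    }

    int main() {
        AVFormatContext* oc = nullptr;
        avformat_alloc_output_context2(&oc, nullptr, nullptr, "out.mp4");
        OutStream video = add_stream(oc, AV_CODEC_ID_H264);
        OutStream audio = add_stream(oc, AV_CODEC_ID_AAC);
        avio_open(&oc->pb, "out.mp4", AVIO_FLAG_WRITE);
        avformat_write_header(oc, nullptr);
        // ... fill AVFrames from the Desktop Duplication / WASAPI buffers,
        // set frame->pts in the encoder time_base, and call:
        //     encode_and_mux(oc, video, videoFrame);
        //     encode_and_mux(oc, audio, audioFrame);
        encode_and_mux(oc, video, nullptr);         // flush encoders
        encode_and_mux(oc, audio, nullptr);
        av_write_trailer(oc);
        avio_closep(&oc->pb);
        avformat_free_context(oc);
    }

    Note that the AAC encoder expects fixed-size input frames (enc->frame_size samples), so the WASAPI byte stream usually has to be re-chunked, for example with an AVAudioFifo as in FFmpeg's transcode_aac.c example, before encoding.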

  • How to synchronize audio and video using FFmpeg from 2 different inputs and stream them over the network via RTP, in C++?

    5 November 2018, by ElPablo

    I am currently trying to develop an app in C++ that performs all of this:

    • Capture Video of the desktop
    • Capture Audio of the desktop
    • Video & Audio processing
    • Stream Audio & Video to another computer

    For this I am using the OpenCV and FFmpeg libraries.

    I have succeeded in capturing the video with OpenCV, converting it into an AVFrame, encoding the frame, and sending it over the network with FFmpeg.

    For the audio, I have also succeeded (with the help of the FFmpeg documentation's transcode_aac.c example) in capturing the audio from my sound card, then decoding, converting, encoding and sending it over the network.

    Then I go to my other computer and read the 2 streams with FFplay:

    ffplay -loglevel repeat+level+verbose -probesize 32 -sync ext -i config.sdp -protocol_whitelist file,udp,rtp

    It works, I have the video and the audio... but the sound is not at all synchronized with the video; it lags by about 3 seconds.

    My code is like this:

    I am using 3 AVFormatContexts:

    • audio input
    • video output
    • audio output

    I did that because an RTP stream can only carry one media stream, so I had to separate the audio and the video.

    So basically, I have 2 inputs and I need 2 outputs.

    I know how to do that on the command line with FFmpeg (and it works, it is synchronized), but I have no idea how to do it and synchronize the streams in C++.

    My guesses are:

    • I have to play with the time_base attribute of the packets during
      encoding => but how can I synchronize packets from two different
      AVStreams and AVFormatContexts?
    • Do I have to set the time_base attribute of the output audio from the
      input audio, or from the 30 FPS that I want? Same question for the
      output video. (A sketch of one approach follows this list.)
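
    A minimal sketch of one common approach, assuming both captures run on the same machine: stamp every captured frame from a single shared clock so both streams agree on where t = 0 is, then rescale into each stream's own time_base when muxing. The helper names are illustrative, not from the asker's code.

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/time.h>
    }

    static const AVRational kCaptureTb = {1, 1000000}; // av_gettime() is in microseconds
    static int64_t g_start_us = -1;                    // shared zero point for both streams
                                                       // (guard with a mutex across threads)

    // Call when a video or audio frame is captured, before sending it to its encoder.
    static void stamp_frame(AVFrame* frame, AVCodecContext* enc) {
        int64_t now = av_gettime();
        if (g_start_us < 0)
            g_start_us = now;                          // first captured frame defines t = 0
        // Elapsed capture time, converted into this encoder's time_base.
        frame->pts = av_rescale_q(now - g_start_us, kCaptureTb, enc->time_base);
    }

    // After encoding, convert encoder timestamps into the stream's time_base and write.
    static void write_packet(AVFormatContext* oc, AVStream* st,
                             AVCodecContext* enc, AVPacket* pkt) {
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;
        av_interleaved_write_frame(oc, pkt);
    }

    Because both RTP outputs then share the same origin, a receiver such as ffplay can align them from the SDP. For audio it is often more robust to derive pts from the running sample count (in a {1, sample_rate} time_base) than from the wall clock, but the key point is the same: one common zero and per-stream rescaling.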

    Further information:

    • The video is captured using this
      OPENCV Desktop Capture approach,
      then converted with sws_scale() into an AVFrame

    • I am using 4 threads (video capture, video processing, audio decoding,
      audio processing)

    So guys, if you have any ideas on how to synchronize the audio and the video, or any other tips that could help me, they will be received with pleasure.

    Thx