Advanced search

Media (0)

Word: - Tags -/diogene

No media matching your criteria is available on this site.

Other articles (111)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or later. If in doubt, contact the administrator of your MediaSPIP to find out.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Automatic installation script for MediaSPIP

    25 April 2011

    To work around installation difficulties caused mainly by server-side software dependencies, an all-in-one bash installation script was created to ease this step on a server running a compatible Linux distribution.
    To use it, you need SSH access to your server and a "root" account, which makes it possible to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

On other sites (5317)

  • How to Synchronize Audio with Video Frames [Python]

    19 September 2023, by Ростислав

    I want to stream video from a URL to a server via a socket, which then restreams it to all clients in the room.

    This code streams the video frame by frame:

    async def stream_video(room, url):
        cap = cv2.VideoCapture(url)
        fps = round(cap.get(cv2.CAP_PROP_FPS))

        while True:
            ret, frame = cap.read()
            if not ret:
                break
            _, img_bytes = cv2.imencode(".jpg", frame)
            img_base64 = base64.b64encode(img_bytes).decode('utf-8')
            img_data_url = f"data:image/jpeg;base64,{img_base64}"

            await socket.emit('segment', { 'room': room, 'type': 'video', 'stream': img_data_url})
            await asyncio.sleep(1/fps)

        cap.release()


    


    And this is the code for streaming audio:

    


    async def stream_audio(room, url):
        sample_size = 14000
        cmd_audio = [
            "ffmpeg",
            "-i", url,
            '-vn',
            '-f', 's16le',
            '-c:a', 'pcm_s16le',
            "-ac", "2",
            "-sample_rate", "48000",
            '-ar', '48000',
            "-acodec", "libmp3lame",
            "pipe:1"
        ]
        proc_audio = await asyncio.create_subprocess_exec(
            *cmd_audio, stdout=subprocess.PIPE, stderr=False
        )

        while True:
            audio_data = await proc_audio.stdout.read(sample_size)
            if audio_data:
                await socket.emit('segment', { 'room': room, 'type': 'audio', 'stream': audio_data})
            await asyncio.sleep(1)



    


    But the problem is: how do I synchronize them? How many bytes need to be read from ffmpeg every second so that the audio matches the frames?

    


    I tried the following, but the problem with the chunk size still remained:

    


    while True:
        audio_data = await proc_audio.stdout.read(sample_size)
        if audio_data:
            await socket.emit('segment', { 'room': room, 'type': 'audio', 'stream': audio_data})

            for i in range(fps):
                ret, frame = cap.read()
                if not ret:
                    break
                _, img_bytes = cv2.imencode(".jpg", frame)
                img_base64 = base64.b64encode(img_bytes).decode('utf-8')
                img_data_url = f"data:image/jpeg;base64,{img_base64}"

                await socket.emit('segment', { 'room': room, 'type': 'video', 'stream': img_data_url})
                await asyncio.sleep(1/fps)


    


    I also tried loading a chunk of audio into pydub, which shows that the duration of my 14000-byte chunk is 0.07 s, which is very short. And if I increase the read size to 192k (as ChatGPT suggests), the audio simply plays very, very fast.
    The best chunk size I was able to find is approximately 14000 bytes, but the audio is still not in sync.
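
    For reference: for raw s16le PCM the byte rate is fixed by the format, 48000 samples/s × 2 channels × 2 bytes per sample = 192,000 bytes per second, so a 14,000-byte chunk is 14000 / 192000 ≈ 0.073 s, matching the pydub figure. Note also that in the command above -acodec libmp3lame comes after -c:a pcm_s16le and, since the later option wins, the pipe actually carries MP3 rather than raw PCM, which would explain why 192k-byte reads played back far more than one second of audio. Below is a minimal sketch of paced reading, assuming the command is trimmed to output raw PCM only; emit is a hypothetical stand-in for the socket.emit call above:

    import asyncio

    async def stream_audio_paced(proc, emit, fps=25):
        # -ar 48000, -ac 2, s16le -> 48000 * 2 * 2 = 192000 bytes per second
        bytes_per_second = 48000 * 2 * 2
        chunk_size = bytes_per_second // fps  # one video frame's worth of audio

        while True:
            try:
                # StreamReader.read(n) may return fewer than n bytes;
                # readexactly() keeps each chunk aligned with wall-clock time
                audio_data = await proc.stdout.readexactly(chunk_size)
            except asyncio.IncompleteReadError as e:
                audio_data = e.partial  # final short chunk at end of stream
            if not audio_data:
                break
            await emit(audio_data)
            await asyncio.sleep(1 / fps)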

    


  • Can someone explain this difference between H.264 and H.265?

    19 January 2017, by Muhammad Abu bakr

    I read this in a research paper:

    "The off-the-shelf video codecs like H.264 handle all the movements equally. In our case there are some non-moving regions that lie in the region of interest and need to be encoded in high quality, and there are some moving regions which don't have such requirements. H.265 can help us in such circumstances."

    How does H.265 deal with movement differently?
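
    As an aside, newer FFmpeg builds expose this kind of region-of-interest control directly: the addroi filter attaches ROI metadata to frames, and encoders whose FFmpeg wrappers support ROI (libx264 among them) then spend more bits inside the marked region. A hypothetical invocation, with the region and quality offset purely illustrative (a negative qoffset requests higher quality inside the region):

    ffmpeg -i input.mp4 -vf "addroi=x=0:y=0:w=iw/2:h=ih/2:qoffset=-0.4" -c:v libx264 output.mp4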

  • FFMPEG: How to mix different types of image sequence inputs when creating multiple streams

    9 August 2021, by chutlies

    I am using a piece of software that generates a .txt file listing rendered images. The list within that txt file is printed with the word 'file' at the beginning of the file path.

    


    Ex: file 'file:C:/Users/User/Desktop/test/Test.0001.png'

    


    I am attempting to input another image sequence as an overlay. If I just overlay, scale, and render, everything works fine:

    


    ffmpeg -hide_banner -y -f concat -safe 0 -i C:/test/input.txt -i "C:/test/BALL.%04d.png" -filter_complex "overlay[4K_in];[4K_in]scale=1920:1080[hdOut]" -map [hdOut] hd.mp4


    


    But when I start to split the stream to create different outputs, it only renders the overlaid stream [1:v] and not the composited image.

    


    ffmpeg -hide_banner -y -f concat -safe 0 -i C:/test/input.txt -i "C:/test/BALL.%04d.png" -filter_complex "overlay,split=3[mp4HD_in][mxf720_in][mov4K_in];[mp4HD_in]scale=1920:1080[mp4HD_out];[mxf720_in]scale=1280:720[mxf720_out];[mov4K_in]scale=3840:2160[mov4K_out]" -crf 16 -vcodec libx264 -map [mp4HD_out] C:/test/hdMP4.mp4 -vcodec prores -map [mov4K_out] C:/test/MOV4K.mov -vcodec dnxhd -pix_fmt yuv422p -b:v 75M -map [mxf720_out] C:test/MXF720.mxf


    


    If I remove 'file:' from the frame paths in the .txt file, it works.

    


    Ex: file 'C:/Users/User/Desktop/test/Test.0001.png'

    


    Unfortunately, I am unable to change this, as the list is generated and run from within a piece of software. Are there any flags or something that I need to add to get around this? Any other possible techniques, beyond starting another && ffmpeg call, to generate the streams over the overlay?

    


    I do get this in the logs:

    [concat @ 0000024a0eacf280] DTS -230584300921369 < 0 out of order
    DTS -230584300921369, next:40000 st:0 invalid dropping
    PTS -230584300921369, next:40000 invalid dropping st:0
    DTS -230584300921369, next:40000 st:0 invalid dropping
    PTS -230584300921369, next:40000 invalid dropping st:0
    DTS -230584300921369, next:40000 st:0 invalid dropping
    PTS -230584300921369, next:40000 invalid dropping st:0
    DTS -230584300921369, next:40000 st:0 invalid dropping
    PTS -230584300921369, next:40000 invalid dropping st:0
    [image2 @ 0000024a0eadd140] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
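
    Since the poster rules out editing the generated file by hand, one possible workaround, assuming a wrapper step can run between the generating software and the ffmpeg call, is to strip the file: protocol prefix from the list just before ffmpeg reads it. A minimal sketch, with the path taken from the commands above and the rewrite assuming every entry has the form shown earlier:

    import re

    def strip_file_protocol(list_path):
        # Turns:  file 'file:C:/Users/User/Desktop/test/Test.0001.png'
        # into:   file 'C:/Users/User/Desktop/test/Test.0001.png'
        with open(list_path, encoding="utf-8") as f:
            text = f.read()
        text = re.sub(r"^file 'file:", "file '", text, flags=re.MULTILINE)
        with open(list_path, "w", encoding="utf-8") as f:
            f.write(text)

    strip_file_protocol("C:/test/input.txt")
    # ...then run the ffmpeg concat command unchanged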