
Other articles (15)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
    images: png, gif, jpg, bmp and more
    audio: MP3, Ogg, Wav and more
    video: AVI, MP4, OGV, mpg, mov, wmv and more
    text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (5923)

  • ffmpeg save mp3 file from available wss stream

    11 July 2021, by phoenixAZ

    In a hello-world Node.js app I have succeeded in getting a feed from a Twilio conference and sending it to Google Speech-to-Text. Concurrently, I want to control recording the available audio stream to MP3 (programmatically calling start and stop). The wss is subscribed to the audio stream, but I don't know how to attach ffmpeg to the local stream. I have tried:

    


            const ffmpeg = require('fluent-ffmpeg');

            // ffmpeg('rtsp://host:port/path/to/stream')
            // experimenting with telling it to use the local stream:
            // ffmpeg(wss.addListener)  // invalid input error
            ffmpeg(wss.stream) // this hits the console error below
                .noVideo()
                .audioChannels(1)
                .audioBitrate(128)
                .duration('1:00')
                .on('end', function () { console.log('saved mp3'); })
                .on('error', function (err) { console.log('error mp3', err); })
                .save('/path/to/output.mp3');


    


    Any suggestions are welcome; this is a Node.js project.
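For what it's worth, fluent-ffmpeg accepts a readable stream (or a file path/URL) as input, not a WebSocket server object, which would explain why `wss.stream` and `wss.addListener` fail; the usual workaround is to pipe the raw audio chunks into ffmpeg's stdin. The question's code is Node, but the underlying ffmpeg invocation is language-neutral; here it is as a stdlib-only Python sketch, where the mulaw/8000 Hz input flags are assumptions about what the stream actually delivers:

```python
import subprocess

def build_mp3_args(output_path):
    """Build an ffmpeg command that reads raw audio from stdin and saves MP3.

    The input flags are assumptions about the stream format: Twilio media
    streams typically carry 8 kHz mono mu-law audio, so adjust -f/-ar/-ac
    if your stream differs.
    """
    return [
        "ffmpeg",
        "-f", "mulaw",       # raw mu-law audio (assumed stream format)
        "-ar", "8000",       # sample rate (assumed)
        "-ac", "1",          # mono
        "-i", "pipe:0",      # read the audio from stdin
        "-vn",               # audio only
        "-b:a", "128k",      # MP3 bitrate
        "-t", "60",          # stop after one minute
        output_path,
    ]

# Usage sketch: start recording by spawning the process and writing each
# incoming audio chunk to proc.stdin; closing stdin stops the recording.
# proc = subprocess.Popen(build_mp3_args("output.mp3"), stdin=subprocess.PIPE)
```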

    


  • Split ffmpeg audio and video to different pipe outputs

    8 September 2021, by Bryan Horna

    I have a conference application using Jitsi Meet, and I use their Jibri tool to stream to an RTMP source.
    But the delay is intolerable at the moment, so while looking for alternatives I found that I can use WebRTC.
    It turns out that most of the tools I've found (Pion, among others) expect me to send separate video and audio streams.
    Jibri, however, uses an ffmpeg command that outputs both streams joined, so I want to split them.

    


    Here's the command in question:

    


    ffmpeg \
-y -v info \
-f x11grab \
-draw_mouse 0 \
-r 30 \
-s 1920x1080 \
-thread_queue_size 4096 \
-i :0.0+0,0 \
-f alsa \
-thread_queue_size 4096 \
-i plug:bsnoop \
-acodec aac -strict -2 -ar 44100 -b:a 128k \
-af aresample=async=1 \
-c:v libx264 -preset veryfast \
-maxrate 2976k -bufsize 5952k \
-pix_fmt yuv420p -r 30 \
-crf 25 \
-g 60 -tune zerolatency \
-f flv rtmps://redacted.info


    


    I'd want it to produce at least two outputs (one for video, another for audio), using pipe:x or rtp://.
    It would be even better if multiple output codecs were possible (audio Opus/AAC, video H.264/VP8), so it supports most of WebRTC.

    


    I've read the documentation on the ffmpeg website but still can't arrive at a command that does this job in a single invocation.
    I know I could use -an and -vn with two different ffmpeg commands, but I guess that would consume more CPU.
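A single ffmpeg invocation can in fact feed several outputs: each output gets its own -map and codec options after the shared input section, so the screen grab is captured and decoded only once. Below is a hedged sketch of such a command built as an argument list (the rtp:// URLs and the simplified input are placeholders; the real x11grab/alsa inputs from the command above would go in their place):

```python
def build_split_args(video_url, audio_url):
    """One ffmpeg invocation, two outputs: video-only and audio-only.

    Everything before the first -map is shared; each -map ... -f rtp <url>
    group defines an independent output with its own codec settings.
    """
    return [
        "ffmpeg", "-y",
        "-i", "input",  # placeholder for the real x11grab/alsa inputs
        # First output: video only (H.264 suits WebRTC).
        "-map", "0:v",
        "-c:v", "libx264", "-preset", "veryfast",
        "-tune", "zerolatency", "-pix_fmt", "yuv420p",
        "-f", "rtp", video_url,
        # Second output: audio only (Opus suits WebRTC).
        "-map", "0:a",
        "-c:a", "libopus", "-ar", "48000", "-b:a", "128k",
        "-f", "rtp", audio_url,
    ]
```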

    


    Thanks in advance for your help!

    


  • NumPy array of a video changes from the original after writing into the same video

    29 March 2021, by Rashiq

    I have a video (test.mkv) that I have converted into a 4D NumPy array of shape (frame, height, width, color_channel). I have even managed to convert that array back into the same video (test_2.mkv) without altering anything. However, after reading this new test_2.mkv back into a NumPy array, the first video's array differs from the second's: their hashes don't match and numpy.array_equal() returns False. I have tried both python-ffmpeg and scikit-video but cannot get the arrays to match.

    


    Python-ffmpeg attempt:

    


    import ffmpeg
import numpy as np
import hashlib

file_name = 'test.mkv'

# Get video dimensions and framerate
probe = ffmpeg.probe(file_name)
video_stream = next((stream for stream in probe['streams'] if stream['codec_type'] == 'video'), None)
width = int(video_stream['width'])
height = int(video_stream['height'])
frame_rate = video_stream['avg_frame_rate']

# Read video into buffer
out, error = (
    ffmpeg
        .input(file_name, threads=120)
        .output("pipe:", format='rawvideo', pix_fmt='rgb24')
        .run(capture_stdout=True)
)

# Convert video buffer to array
video = (
    np
        .frombuffer(out, np.uint8)
        .reshape([-1, height, width, 3])
)

# Convert array to buffer
video_buffer = (
    np.ndarray
        .flatten(video)
        .tobytes()
)

# Write buffer back into a video
process = (
    ffmpeg
        .input('pipe:', format='rawvideo', s='{}x{}'.format(width, height))
        .output("test_2.mkv", r=frame_rate)
        .overwrite_output()
        .run_async(pipe_stdin=True)
)
process.communicate(input=video_buffer)

# Read the newly written video
out_2, error = (
    ffmpeg
        .input("test_2.mkv", threads=40)
        .output("pipe:", format='rawvideo', pix_fmt='rgb24')
        .run(capture_stdout=True)
)

# Convert new video into array
video_2 = (
    np
        .frombuffer(out_2, np.uint8)
        .reshape([-1, height, width, 3])
)

# Video dimensions change
print(f'{video.shape} vs {video_2.shape}') # (844, 1080, 608, 3) vs (2025, 1080, 608, 3)
print(f'{np.array_equal(video, video_2)}') # False

# Hashes don't match
print(hashlib.sha256(bytes(video_2)).digest()) # b'\x88\x00\xc8\x0ed\x84!\x01\x9e\x08 \xd0U\x9a(\x02\x0b-\xeeA\xecU\xf7\xad0xa\x9e\\\xbck\xc3'
print(hashlib.sha256(bytes(video)).digest()) # b'\x9d\xc1\x07xh\x1b\x04I\xed\x906\xe57\xba\xf3\xf1k\x08\xfa\xf1\xfaM\x9a\xcf\xa9\t8\xf0\xc9\t\xa9\xb7'
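One step worth ruling out first: the flatten/tobytes/frombuffer chain itself is lossless, as a small self-contained check shows. The mismatch therefore comes from the encode/decode steps; writing test_2.mkv re-encodes the frames with a lossy default codec, and because the raw-video input above declares no framerate, ffmpeg likely assumes 25 fps and duplicates frames to hit the output rate, which would explain the jump from 844 to 2025 frames (844 × 60/25 ≈ 2025).

```python
import numpy as np

# Synthetic "video": 4 frames of 6x8 RGB pixels.
rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(4, 6, 8, 3), dtype=np.uint8)

# The same conversion chain used in the question: array -> bytes -> array.
buffer = np.ndarray.flatten(video).tobytes()
video_back = np.frombuffer(buffer, np.uint8).reshape([-1, 6, 8, 3])

print(np.array_equal(video, video_back))  # True: the buffer round-trip is exact
```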


    


    Scikit-video attempt:

    


    import skvideo.io as sk
import numpy as np
import hashlib

video_data = sk.vread('test.mkv')

sk.vwrite('test_2_ski.mkv', video_data)

video_data_2 = sk.vread('test_2_ski.mkv')

# Dimensions match but...
print(video_data.shape) # (844, 1080, 608, 3)
print(video_data_2.shape) # (844, 1080, 608, 3)

# ...array elements don't
print(np.array_equal(video_data, video_data_2)) # False

# Hashes don't match either
print(hashlib.sha256(bytes(video_data_2)).digest()) # b'\x8b?]\x8epD:\xd9B\x14\xc7\xba\xect\x15G\xfaRP\xde\xad&EC\x15\xc3\x07\n{a[\x80'
print(hashlib.sha256(bytes(video_data)).digest()) # b'\x9d\xc1\x07xh\x1b\x04I\xed\x906\xe57\xba\xf3\xf1k\x08\xfa\xf1\xfaM\x9a\xcf\xa9\t8\xf0\xc9\t\xa9\xb7'


    


    I don't understand where I'm going wrong, and neither library's documentation covers this particular task. Any help is appreciated. Thank you.
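For a bit-exact round trip, the write step would need the input described fully (pix_fmt and framerate, so ffmpeg neither guesses the pixel layout nor resamples frames) and a lossless output codec such as FFV1 in place of the lossy default. A hedged sketch of that command as an argument list; the flag choices are assumptions about what the files above contain:

```python
def build_lossless_write_args(width, height, frame_rate, out_path):
    """ffmpeg command that writes raw rgb24 frames from stdin losslessly."""
    return [
        "ffmpeg", "-y",
        # Describe the raw input fully so no pixel-format guessing (and no
        # framerate resampling) happens on the way in.
        "-f", "rawvideo",
        "-pix_fmt", "rgb24",
        "-s", f"{width}x{height}",
        "-r", frame_rate,
        "-i", "pipe:",
        # FFV1 is lossless, so a later rgb24 decode should return the same
        # pixel values (assuming no colorspace conversion is forced).
        "-c:v", "ffv1",
        out_path,
    ]
```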