Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (70)

  • Customizing by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently only available in French and (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

On other sites (12982)

  • SDL2 won't play with more than 6 audio channels

    13 June 2020, by Hiko Haieto

    I am trying to stream (raw) video and audio from a capture device as part of my home media setup, with my PC acting much like the receiver in a typical home theatre setup. The biggest problem I haven't been able to get past is that ffplay (which uses SDL2 as its audio backend) won't play all 8 channels of a 7.1 stream: two simply get dropped, despite ffplay recognising 8-channel input and despite my specifying a 7.1 layout.

    I have been able to confirm that all 8 channels are present in the source by first using ffmpeg to save the output of a speaker test to a file, then playing that back with both mplayer (which works) and ffplay (which doesn't). I also wrote some minimal code to play the audio directly through SDL's API, with the same result, so it is not the fault of ffplay. I might simply use mplayer if it weren't for the fact that piping output from ffmpeg adds too much latency for real-time use. I am using libSDL 2.0.12 and ffplay 4.2.3, both of which are the latest at the time of writing and both of which are ostensibly supposed to support 7.1 audio.

    Using output recorded from speaker-test -c 8, I am using the following to play it back in mplayer:

    mplayer -channels 8 -rawaudio channels=8 -format s16le -demuxer rawaudio speaker-test.pcm

    and the following to play it back in ffplay:

    ffplay -f s16le -ac 8 -af 'channelmap=channel_layout=7.1' speaker-test.pcm

    No matter what I try, the two side channels get dropped. I couldn't figure out how to play raw PCM in SDL, so I repeated the same tests with WAV output and used the following code to play it back:

    #include <SDL2/SDL.h>

    int main(int argc, char **argv) {
        SDL_Init(SDL_INIT_AUDIO);

        /* Load the 8-channel WAV recorded from speaker-test. */
        SDL_AudioSpec wavSpec;
        Uint32 wavLength;
        Uint8 *wavBuffer;
        SDL_LoadWAV("speaker-test.wav", &wavSpec, &wavBuffer, &wavLength);

        /* Open the default output device with the WAV's spec. Passing NULL
           for the obtained spec tells SDL to convert silently to whatever
           the device supports. */
        SDL_AudioDeviceID deviceID = SDL_OpenAudioDevice(NULL, 0, &wavSpec, NULL, 0);

        /* Queue the entire file and let it play for 30 seconds. */
        SDL_QueueAudio(deviceID, wavBuffer, wavLength);
        SDL_PauseAudioDevice(deviceID, 0);
        SDL_Delay(30000);

        SDL_CloseAudioDevice(deviceID);
        SDL_FreeWAV(wavBuffer);
        SDL_Quit();
        return 0;
    }


    The above code exhibits the same behaviour, dropping the two side channels, despite this being the latest version of SDL, which should have supported 7.1 audio for many releases now. Why might this be happening, and how might I fix it?

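    One way to narrow this down (a minimal diagnostic sketch, not a confirmed fix): when SDL_OpenAudioDevice is called with NULL for the obtained spec, as above, SDL converts the audio behind the scenes to whatever the device actually supports, so the side channels may be dropped in that conversion rather than in the WAV loading. Passing an obtained spec together with SDL_AUDIO_ALLOW_ANY_CHANGE shows what the backend really opened:

    #include <SDL2/SDL.h>
    #include <stdio.h>

    int main(void) {
        SDL_Init(SDL_INIT_AUDIO);

        /* Request the 7.1 layout explicitly. */
        SDL_AudioSpec want, have;
        SDL_zero(want);
        SDL_zero(have);
        want.freq = 48000;
        want.format = AUDIO_S16SYS;
        want.channels = 8;
        want.samples = 4096;

        /* SDL_AUDIO_ALLOW_ANY_CHANGE keeps SDL from converting behind our
           back; 'have' then reports what the device was really opened with. */
        SDL_AudioDeviceID dev = SDL_OpenAudioDevice(NULL, 0, &want, &have,
                                                    SDL_AUDIO_ALLOW_ANY_CHANGE);
        if (dev == 0) {
            fprintf(stderr, "open failed: %s\n", SDL_GetError());
        } else {
            printf("requested %d channels, got %d\n", want.channels, have.channels);
            SDL_CloseAudioDevice(dev);
        }
        SDL_Quit();
        return 0;
    }

    If have.channels comes back as 6, the 8-to-6 downmix is happening in SDL's backend for that device, which points at the audio driver rather than at ffplay or the file.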

  • Transcoding WAV audio to AAC in an MP4 container using FFmpeg C API

    7 November 2022, by vstrom coder

    I'm trying to compose source video and audio into a final MP4 video. I have a problem with the WAV audio: after decoding and filtering, I'm getting an error from the output encoder: [aac @ 0x145e04c40] more samples than frame size

    I initially used the following filter graph (minimal reproducible example):

        abuffer -> aformat -> abuffersink

    At this point I was getting the error mentioned above.


    Then I tried to insert an aresample filter into the graph:

        abuffer -> aresample -> aformat -> abuffersink

    But I'm still getting the same error. I tried this because the ffmpeg CLI auto-inserts this filter when converting WAV to MP4:

    Command:

        ffmpeg -i source.wav output.mp4 -loglevel debug

    The output contains:

        [graph_0_in_0_0 @ 0x138f06200] Setting 'time_base' to value '1/44100'
        [graph_0_in_0_0 @ 0x138f06200] Setting 'sample_rate' to value '44100'
        [graph_0_in_0_0 @ 0x138f06200] Setting 'sample_fmt' to value 's16'
        [graph_0_in_0_0 @ 0x138f06200] Setting 'channel_layout' to value 'mono'
        [graph_0_in_0_0 @ 0x138f06200] tb:1/44100 samplefmt:s16 samplerate:44100 chlayout:mono
        [format_out_0_0 @ 0x138f06620] Setting 'sample_fmts' to value 'fltp'
        [format_out_0_0 @ 0x138f06620] Setting 'sample_rates' to value '96000|88200|64000|48000|44100|32000|24000|22050|16000|12000|11025|8000|7350'
        [format_out_0_0 @ 0x138f06620] auto-inserting filter 'auto_aresample_0' between the filter 'Parsed_anull_0' and the filter 'format_out_0_0'
        [AVFilterGraph @ 0x138f060f0] query_formats: 4 queried, 6 merged, 3 already done, 0 delayed
        [auto_aresample_0 @ 0x138f06c30] [SWR @ 0x120098000] Using s16p internally between filters
        [auto_aresample_0 @ 0x138f06c30] ch:1 chl:mono fmt:s16 r:44100Hz -> ch:1 chl:mono fmt:fltp r:44100Hz
        Output #0, mp4, to 'output.mp4':
          Metadata:
            encoder         : Lavf59.27.100
          Stream #0:0, 0, 1/44100: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, delay 1024, 69 kb/s
            Metadata:
              encoder         : Lavc59.37.100 aac

    I'm trying to figure out whether I should use the SWR library directly as exemplified in the transcode_aac example.

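    For what it's worth, "more samples than frame size" usually means the frames coming out of the buffersink are larger than the AAC encoder's fixed frame size (1024 samples); aresample changes the sample format but not how many samples each frame carries. Before dropping down to SWR by hand, one option is to ask the buffersink for frames of exactly the encoder's frame size. A small sketch, assuming buffersink_ctx is the abuffersink instance and enc_ctx is the opened AAC encoder context (both names are placeholders):

        /* After avfilter_graph_config(): have abuffersink emit frames of
           exactly enc_ctx->frame_size samples (1024 for AAC), so the encoder
           never receives more samples than it accepts per frame. */
        if (!(enc_ctx->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE))
            av_buffersink_set_frame_size(buffersink_ctx, enc_ctx->frame_size);

    This mirrors what the ffmpeg CLI itself does when it feeds a filter graph into an audio encoder, and it avoids managing an SWR context and sample FIFO manually.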

  • Efficient real-time video stream processing and forwarding with RTMP servers

    19 May 2023, by dumbQuestions

    I have a scenario where I need to retrieve a video stream from an RTMP server, apply image processing (specifically, adding blur to frames), and then forward the processed stream to another RTMP server (in this case, Twitch).


    Currently, I'm using ffmpeg in conjunction with cv2 to retrieve and process the stream. However, this approach introduces significant lag when applying the blur. I'm seeking an alternative method that can achieve the desired result more efficiently. I did attempt to solely rely on ffmpeg for the entire process, but I couldn't find a way to selectively process frames based on a given condition and subsequently transmit only those processed frames.


    Is there a more efficient approach or alternative solution that can address this issue and enable real-time video stream processing with minimal lag?


    Thanks in advance!


    def forward_stream(server_url, stream_key, twitch_stream_key):
        get_ffmpeg_command = [...]

        send_ffmpeg_command = [...]

        # Start the reading FFmpeg process
        read_process = subprocess.Popen(get_ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

        # Start the sending FFmpeg process
        send_process = subprocess.Popen(send_ffmpeg_command, stdin=subprocess.PIPE, stderr=subprocess.DEVNULL)

        # Open video capture
        cap = cv2.VideoCapture(f'{server_url}')

        while True:
            # Read a frame
            ret, frame = cap.read()
            if ret:
                # Decide whether this frame needs blurring
                should_blur = machine_learning_algorithm(frame)

                # Apply blur if necessary
                if should_blur:
                    frame = cv2.blur(frame, (25, 25))

                # Write the frame to the sending FFmpeg process
                send_process.stdin.write(frame.tobytes())
            else:
                break

        # Release resources
        cap.release()
        read_process.stdout.close()
        read_process.wait()

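    In case a concrete shape for the forwarding half helps: a hypothetical sketch of the send command, assuming cv2 delivers raw bgr24 frames and that the resolution and framerate are known (1920x1080 at 30 fps here is an assumption, as is the Twitch ingest endpoint):

    ffmpeg -f rawvideo -pix_fmt bgr24 -s 1920x1080 -r 30 -i - -c:v libx264 -preset veryfast -tune zerolatency -f flv rtmp://live.twitch.tv/app/<twitch_stream_key>

    Much of the lag in this kind of pipeline tends to come from decoding to raw frames in Python and re-encoding, rather than from the blur itself; -tune zerolatency at least disables the x264 lookahead buffering that would otherwise add several frames of delay.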