

Other articles (87)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and in MP3 (supported by Flash).
    Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
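
    MediaSPIP delegates these conversions to an external encoder such as ffmpeg. As a rough illustration of the kind of commands involved (generic invocations, not MediaSPIP's actual internal ones):

    ffmpeg -i upload.mov -c:v libtheora -c:a libvorbis video.ogv
    ffmpeg -i upload.mov -c:v libvpx -c:a libvorbis video.webm
    ffmpeg -i upload.mov -c:v libx264 -c:a aac video.mp4
    ffmpeg -i upload.wav -c:a libmp3lame audio.mp3
    ffmpeg -i upload.wav -c:a libvorbis audio.ogg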

  • User profiles

    12 April 2011, by

    Each user has a profile page allowing them to edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (administration) section of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language; once one exists, it becomes greyed out in the configuration and (...)

On other sites (11955)

  • How To Write An Oscilloscope

    29 April 2012, by Multimedia Mike — General, gme, oscilloscope, visualization

    I’m trying to figure out how to write a software oscilloscope audio visualization. It’s made more frustrating by the knowledge that I am certain that I have accomplished this task before.

    In this context, the oscilloscope is used to draw the time-domain samples of an audio wave form. I have written such a plugin as part of the xine project. However, for that project, I didn’t have to write the full playback pipeline— my plugin was just handed some PCM data and drew some graphical data in response. Now I’m trying to write the entire engine in a standalone program and I’m wondering how to get it just right.

    This is an SDL-based oscilloscope visualizer and audio player for the Game Music Emu library. My approach is to have an audio buffer that holds a second of audio (44100 stereo 16-bit samples). The player updates the visualization at 30 frames per second. The o-scope is 512 pixels wide. So, at every 1/30th-second interval, the player dips into the audio buffer at position ((frame_number % 30) * 44100 / 30) and takes the first 512 stereo frames for plotting on the graph.
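
    For concreteness, here is a minimal C sketch of that indexing scheme (the buffer name and the plotting callback are illustrative placeholders, not code from my actual player):

    #include <stdint.h>

    #define SAMPLE_RATE 44100   /* stereo frames per second in the buffer */
    #define FPS         30      /* visualization updates per second */
    #define SCOPE_WIDTH 512     /* o-scope width in pixels */

    /* One second of interleaved stereo 16-bit PCM (L, R, L, R, ...). */
    static int16_t audio_buf[SAMPLE_RATE * 2];

    /* Hypothetical plotting callback supplied by the SDL renderer. */
    void plot_point(int x, int16_t left, int16_t right);

    /* Called once per video frame; frame_number increments at 30 Hz. */
    void draw_scope(unsigned frame_number)
    {
        /* Start of this frame's 1/30th-second slice within the buffer. */
        unsigned start = (frame_number % FPS) * SAMPLE_RATE / FPS;

        /* Plot the first 512 stereo frames of the slice, one per pixel column. */
        for (int x = 0; x < SCOPE_WIDTH; x++) {
            unsigned i = (start + x) * 2;   /* index of the left sample */
            plot_point(x, audio_buf[i], audio_buf[i + 1]);
        }
    }

    Each video frame thus plots a 512-frame window taken from the start of the 1/30th-second slice it corresponds to; the remaining (44100/30 - 512) frames of the slice are never drawn.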

    It seems to be working okay, I guess. The only problem is that the A/V sync seems to be slightly misaligned. I am just wondering if this is the correct approach. Perhaps the player should be performing some slightly more complicated calculation over those (44100/30) audio frames during each update in order to obtain a more accurate graph? I described my process to an electrical engineer friend of mine and he insisted that I needed to apply something called hysteresis to the output or I would never get accurate A/V sync in this scenario.

    Further, I know that some schools of thought on these matters require that the dots in those graphs be connected, that the scattered points simply won’t do. I guess it’s a stylistic choice.

    Still, I think I have a reasonable, workable approach here. I might just be starting the visualization 1/30th of a second too late.

  • How to improve Desktop capture performance and quality with ffmpeg [closed]

    6 November 2024, by Francesco Bramato

    I'm developing a game capture feature for my Electron app. I've been working on this for a while and have tried a lot of different parameter combinations; now I'm running out of ideas :)

    I've read tons of ffmpeg documentation, SO posts, and other sites, but I'm not really an ffmpeg expert or a video editing pro.

    This is how it works now:

    The app spawns an ffmpeg command based on the user's settings:

    • Output format (mp4, mkv, avi)
    • Framerate (12, 24, 30, 60)
    • Codec (X264, NVIDIA NVENC, AMD AMF)
    • Bitrate (from 1000 to 10000 kbps)
    • Presets (for X264)
    • Audio output (a dshow device like StereoMix or VB-Cable) and audio input (a dshow device like the microphone)
    • Final resolution (720p, 1080p, 2K, original size)

    The command currently executed is:

    ffmpeg.exe -nostats -hide_banner -hwaccel cuda -hwaccel_output_format cuda -f gdigrab -draw_mouse 0 -framerate 60 -offset_x 0 -offset_y 0 -video_size 2560x1440 -i desktop -f dshow -i audio=@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{D61FA53D-FA37-4BE7-BE2F-4005F94790BB} -ar 44100 -colorspace bt709 -color_trc bt709 -color_primaries bt709 -c:v h264_nvenc -b:v 6000k -preset slow -rc cbr -profile:v high -g 60 -acodec aac -maxrate 6000k -bufsize 12000k -pix_fmt yuv420p -f mpegts -

    One of the settings is the recording mode: full game session or replay buffer.
    In a full game session the output is a file; in replay buffer mode it is stdout.

    The output format is mpegts because, as far as I have read in a lot of places, the stream can be cut at any moment.

    Replays are cut with different past and future durations based on game events.

    In full game session mode, the replays are cut directly from the mpegts file.

    In replay buffer mode, the ffmpeg stdout is redirected to the app, which keeps the buffer (1 or 2 minutes). When a replay must be created, the app saves the relevant section of the buffer to disk according to the past and future durations and, with another ffmpeg command, copies it to a final mp4 or mkv file, as sketched below.
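
    That last step is essentially a stream copy of the saved slice. A minimal sketch of such a command (the file names, offset, and duration are placeholders, not my app's actual values):

    ffmpeg -ss 00:00:45 -i replay_buffer.ts -t 30 -c copy replay.mp4

    Because -c copy avoids re-encoding, the cut can only start on a keyframe, so the actual start may shift slightly from the requested offset.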

    Generally speaking, this works reliably.

    There are a few issues:

    • even though I ask ffmpeg to capture at 60 fps, the final result is at 30 fps (using -r 60 will speed up the final result)
    • some users have reported in-game FPS drops, especially when using NVIDIA NVENC (on an NVIDIA GPU); using X264 seems to save some FPS
    • colors look strange compared to what I see on screen; they seem washed out. I may have solved this with -colorspace bt709 -color_trc bt709 -color_primaries bt709, but I don't know if it is the right choice
    • NVIDIA NVENC with any preset other than slow creates terribly laggy videos

    Here are two examples, 60 FPS, NVIDIA NVENC (slow preset, 6000 kbps, MP4):

    Recorded by my app: https://www.youtube.com/watch?v=Msm62IwHdlk

    Recorded by OBS with nearly the same settings: https://youtu.be/WuHoLh26W7E

    Hope someone can help me.

    Thanks!
  • FFmpeg RTSP Recording: Video Timestamp Does Not Match Recorded MP4 File Timestamp [closed]

    26 April 2024, by lastpeony4

    I'm currently testing by streaming a 30 fps example flv video using a local Happy-Time RTSP server.

    


    This is the flv file I am streaming over RTSP:

    [image: details of the source flv file]

    I recorded the video with the ffmpeg command below:

    ffmpeg -i rtsp://127.0.0.1:6555/test30fps.flv -c copy test30fps.mp4

    The resulting video appears visually satisfactory, yet there's a discrepancy between the displayed time on the video and the actual duration of the video file. Although the MP4 file duration is correct (endRecordingTimeMs - startRecordingTimeMs = mp4 file duration), the time displayed within the video does not synchronize precisely with the file's time. Notably, this disparity escalates as the video progresses.
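
    One way to check whether the drift is already in the recorded timestamps is to dump the per-frame timestamps with ffprobe and compare them against the clock overlaid in the picture (a diagnostic sketch; the file name matches the command above):

    ffprobe -v error -select_streams v:0 -show_entries frame=pts_time -of csv test30fps.mp4

    If the printed pts_time values advance faster or slower than the overlaid clock, the drift was introduced at recording time rather than by the player.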

    I expect the time text overlaid on the video and the file's time to align seamlessly. However, a divergence of a few seconds is noticeable, gradually expanding over the video's duration.

    [image: the time overlaid on the video diverging from the player's time]

    Why does this occur, and is there any way to fix it?