
Other articles (72)

  • Participating in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    Currently MediaSPIP is only available in French and (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their own information on the authors page

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types online.
    It creates "médias", namely: a "média" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "média" article;

On other sites (8335)

  • How to improve Desktop capture performance and quality with ffmpeg [closed]

    6 November 2024, by Francesco Bramato

    I'm developing a game capture feature for my Electron app. I've been working on this for a while and have tried a lot of different parameter combinations; now I'm running out of ideas :)

    I've read tons of ffmpeg documentation, SO posts and other sites, but I'm not really an ffmpeg expert or a video editing pro.

    This is how it works now:

    The app spawns an ffmpeg command based on the user's settings:

    • Output format (mp4, mkv, avi)
    • Framerate (12, 24, 30, 60)
    • Codec (X264, NVIDIA NVENC, AMD AMF)
    • Bitrate (from 1000 to 10000 kbps)
    • Presets (for X264)
    • Audio output (a dshow device like StereoMix or VB-Cable) and audio input (a dshow device like the microphone)
    • Final resolution (720p, 1080p, 2K, original size)

    The command executed so far is:

    ffmpeg.exe -nostats -hide_banner -hwaccel cuda -hwaccel_output_format cuda -f gdigrab -draw_mouse 0 -framerate 60 -offset_x 0 -offset_y 0 -video_size 2560x1440 -i desktop -f dshow -i audio=@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{D61FA53D-FA37-4BE7-BE2F-4005F94790BB} -ar 44100 -colorspace bt709 -color_trc bt709 -color_primaries bt709 -c:v h264_nvenc -b:v 6000k -preset slow -rc cbr -profile:v high -g 60 -acodec aac -maxrate 6000k -bufsize 12000k -pix_fmt yuv420p -f mpegts -

    One of the settings is the recording mode: full game session or replay buffer. In the case of a full game session the output is a file; for the replay buffer it is stdout.

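    For illustration, only the output argument at the end of the command above changes between the two modes (session.ts is a hypothetical file name, not the one the app actually uses):

    ... -f mpegts session.ts   (full game session: write to a file)
    ... -f mpegts -            (replay buffer: write to stdout)
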
    The output format is mpegts because, as far as I have read in a lot of places, the video stream can be cut at any moment.

    Replays are cut with different past and future durations based on game events.

    In a full game session, the replays are cut directly from the mpegts file.

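    For example, such a cut can be done with a plain stream copy; this is only a sketch with made-up file names and timings, not the exact command the app runs:

    ffmpeg -ss 00:12:34 -i session.ts -t 30 -c copy replay.mp4
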
    In replay buffer mode, the ffmpeg stdout is redirected to the app, which keeps a rolling buffer (1 or 2 minutes). When a replay must be created, the app saves the relevant section of the buffer to disk according to the past and future durations, and with another ffmpeg command copies it to a final mp4 or mkv file.

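    That second command is essentially a remux; a minimal sketch, with hypothetical file names:

    ffmpeg -i replay_section.ts -c copy replay.mp4
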
    Generally speaking, this works reliably.

    There are a few issues:

    • Even though I ask ffmpeg to capture at 60 fps, the final result is at 30 fps (using -r 60 just speeds up the final result)
    • Some users have reported FPS drops in-game, especially when using NVIDIA NVENC (on an NVIDIA GPU); using X264 seems to save some FPS
    • Colors are strange compared to what I see on screen; they seem washed out. I may have solved this using -colorspace bt709 -color_trc bt709 -color_primaries bt709, but I don't know if that is the right choice
    • NVIDIA NVENC with any preset other than slow creates terribly laggy videos

    Here are two examples: 60 FPS, NVIDIA NVENC, slow preset, 6000 kbps, MP4.

    Recorded by my app: https://www.youtube.com/watch?v=Msm62IwHdlk

    Recorded by OBS with nearly the same settings: https://youtu.be/WuHoLh26W7E

    Hope someone can help me.

    Thanks!

  • Windows Pipes STDIN and STDOUT Parent Child proc communication IPC FFMPEG

    15 October 2018, by Evren Bingøl

    I am writing a simple Windows app which demonstrates piping.

    I pass a byte-sized value down to the child process, the child increments it and sends the byte back to the parent, and this loops until the value reaches MAX_CHAR.
    It is pretty much a demonstration of "i++" with IPC.

    Parent Process

    while (i < 256) {
       // send one byte to the child, then wait for the incremented byte to come back
       bSuccess = WriteFile(g_hChildStd_IN_Wr, chBuf, sizeof(char), &dwWritten, NULL);
       bSuccess = ReadFile(g_hChildStd_OUT_Rd, chBuf, sizeof(char), &dwRead, NULL); // IF THERE IS NO FFLUSH IT BLOCKS
       i++;
    }

    And in the child

    while (i < 256) {
           unsigned char data = 0;
           fread(&data, sizeof(char), 1, stdin);   // read one byte from the parent via stdin
           data++;                                 // increment it
           fwrite(&data, sizeof(char), 1, stdout); // write the incremented byte back to the parent
           //fflush(stdout); IF I DO NOT HAVE THIS  PARENT BLOCKS ON READ
           i++;
    }

    First of all, if I do not fflush the child process's stdout, the parent blocks on reading the child's stdout.

    How can one run this code without having to fflush the child's stdout?
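
    As far as I understand, this happens because stdio buffers the child's writes when stdout is a pipe. Just as an illustration of that mechanism (it only helps for the demo child above, which I wrote myself, not for a third-party child), the buffering could be disabled once at startup instead of flushing after every write:

    #include <stdio.h>

    int main(void) {
        /* make stdout unbuffered so every fwrite reaches the pipe immediately */
        setvbuf(stdout, NULL, _IONBF, 0);
        /* ... the read/increment/write loop shown above ... */
        return 0;
    }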

    Closing the pipe after the child's first write is not an option, as it is in a loop and needs to execute 256 times.

    More generically, I want the child to write N bytes to the parent; the parent reads those N bytes, does something with them, and writes another N bytes back to the child; the child does something with those N bytes and writes N bytes back to the parent. This happens M times.

    The thing is, I cannot use fflush, because my final goal is to use a child process that is not implemented by me.

    My final goal is to pipe data to FFMPEG, have it encode the data, read the encoded data back from its stdout, and do this over and over without having to fork a new FFMPEG process for each image frame: fork one instance of FFMPEG and pipe data in and read data out of it. And since I did not implement ffmpeg, I cannot change its source code.
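
    Just to illustrate the kind of invocation I have in mind (the pixel format, size and frame rate are placeholders for whatever the real frames use), something along the lines of:

    ffmpeg -f rawvideo -pixel_format bgra -video_size 1920x1080 -framerate 30 -i pipe:0 -c:v libx264 -preset veryfast -f mpegts pipe:1

    with the parent writing raw frames to ffmpeg's stdin and reading the encoded mpegts stream back from its stdout.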

    Thanks

  • Bug #4187 (Closed): Logged in using English (main language French), no button to add...

    8 February 2021, by cedric -

    And I am closing it here because the bug does not concern SVP but extraire_multi, so the issue is handled in

    #4189