Advanced search

Media (0)

Word: - Tags -/api

No media matching your criteria is available on the site.

Other articles (99)

  • Customize by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image;

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); text content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (7077)

  • avr32: remove explicit support

    9 June 2024, by Rémi Denis-Courmont
    avr32: remove explicit support

    The vendor has long since switched to Arm, with the last product
    reaching their official end-of-life over 11 years ago. Linux support for
    the ISA was dropped 7 years ago. More importantly, this architecture was
    never supported by upstream GCC, and the vendor fork is stuck at version
    4.2, which FFmpeg no longer supports (as per C11 requirement).

    Presumably, this is still the case given the lack of vendor support.
    Indeed all of the code being removed here consisted of inline assembler
    scalar optimisations. A sane C compiler should be able to perform those
    automatically nowadays (with the sole exception of fast CLZ detection),
    but this is moot as this architecture is evidently dead.

    • [DH] configure
    • [DH] libavcodec/avr32/mathops.h
    • [DH] libavcodec/mathops.h
    • [DH] libavutil/avr32/intreadwrite.h
    • [DH] libavutil/intreadwrite.h
  • FFMPEG MKV -> MP4 Batch Conversion

    15 July 2024, by blaziken386

    I'm trying to write a program that lets me convert a series of .mkv files with subtitle files into .mp4 files with the subs hardcoded.

    Right now, the script I use is

    ffmpeg -i input.mkv -vf subtitles=input.mkv output.mp4

    This is fine, but it means I can only convert them one at a time, and it's kind of a hassle because I have to fiddle with it every few minutes to set up the next one.

    I have another script I use for converting .flac files to .mp3 files, which is

    @ECHO OFF

REM loop over every .flac file in the current folder; %%f is the current filename
FOR %%f IN (*.flac) DO (
echo Converting: %%f
REM %%~nf strips the extension, so each output keeps the same base name
ffmpeg -i "%%f" -ab 320k -map_metadata 0 "%%~nf.mp3"
)

echo Finished

PAUSE

    Running that converts every single .flac file in the folder into an .mp3 equivalent, with the same filename and everything.

    I've tried to combine the above scripts into something like this:

    @ECHO OFF

FOR %%f IN (*.mkv) DO (
echo Converting: %%f
ffmpeg -i "%%f" -vf subtitles=%%f "%%~nf.mp4"
)

echo Finished

PAUSE

    but every time I do so, it returns errors like "invalid argument", "unable to find a suitable output type", "error initializing filters", or "2 frames left in the queue on closing", or something along those lines. I've swapped out subtitles=%%f for "subtitles-%%f" or subtitles="%%f.mkv" and so on, and none of those give me what I want either. Sometimes it creates empty .mp4 containers with nothing in them, sometimes it does nothing at all.

    I don't really understand what exactly is happening under the hood in that flac->mp3 code, because I grabbed it from a different Stack Overflow post years ago. All I know is that trying to copy that code and repurpose it into something else doesn't work. Is this just an issue where I've fucked up the formatting of the code and not realized it, or is this a "ffmpeg can't actually do that because of a weird technical issue" thing? (A quoting sketch follows at the end of this post.)

    I also tried the code listed here, which Stack Overflow suggested as a possible duplicate, but that gave me similar errors, and I don't really understand why!

    Also, if it's relevant, I'm running Windows.
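
    A minimal sketch of the kind of quoting that might help, assuming the failures come from cmd and the filtergraph parser splitting the unquoted subtitles=%%f value; the added quotes are an assumption, and filenames containing quotes or colons would still need filtergraph escaping:

@ECHO OFF

FOR %%f IN (*.mkv) DO (
echo Converting: %%f
REM quote the whole -vf value so cmd passes it through as a single argument,
REM and single-quote the filename so the filtergraph parser keeps spaces intact
ffmpeg -i "%%f" -vf "subtitles='%%f'" "%%~nf.mp4"
)

echo Finished

PAUSE

    If that still fails on awkward names, renaming the files to something without spaces or punctuation before converting is a blunt but reliable workaround.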

  • Construct fictitious P-frames from just I-frames [closed]

    25 July 2024, by nilgirian

    Some context: I recently saw this video (https://youtu.be/zXTpASSd9xE?si=5alGvZ_e13w0Ahmb), a continuous zoom into a fractal.

    I've been thinking a lot about how they created this video 9 years ago. The problem is that these frames were mathematically intensive to calculate back then, and they are still fairly hard to compute today.

    He states in the video it took him 33 hours to generate 1 keyframe.

    I was wondering how I would replicate that work. I know that by brute force I could generate a series of image files (essentially, each image would be an I-frame) and then ask ffmpeg to compress them into an mp4 (where it would convert most of those images into P-frames). But if I did it that way, I calculated it would take me 6.5 years to render that 9-minute video (at 30fps, 9 years ago).

    So I imagine he only generated the I-frames to cut down on time, and then somehow created fictitious P-frames in between. Given that consecutive frames are similar, this seems like it should be doable, since you're just zooming in. If he only generated one I-frame per second of video (at 30fps), that work could be cut down to just 82 days.

    So, if I only want to generate the images that will be used as I-frames, could ffmpeg or some other program automatically make a best guess and generate the fictitious P-frames for me?
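
    The closest ready-made option here is motion interpolation rather than true P-frame synthesis: ffmpeg's minterpolate filter can invent in-between frames from a sparse set of stills. A sketch, assuming the rendered keyframes are numbered PNGs (frame_0001.png and so on are placeholder names) read in at one frame per second and interpolated up to 30fps:

    ffmpeg -framerate 1 -i frame_%04d.png -vf "minterpolate=fps=30:mi_mode=mci" -c:v libx264 -pix_fmt yuv420p zoom.mp4

    The interpolated frames are only the filter's best guess from neighbouring stills, so fine fractal detail may smear between keyframes; whether that is acceptable is a separate question.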