Media (0)

No media matching your criteria is available on the site.

Other articles (68)

  • Personalising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three personalisation elements: adding a logo; adding a banner; adding a background image.

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" section of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" section where support for new languages can be enabled.
    Each newly added language can be deactivated as long as no object has been created in that language. In that case it becomes greyed out in the configuration and (...)

On other sites (6470)

  • FFmpeg / Python - command works when run from shell but fails when run from Python

    4 April 2019, by artembus

    I have a Python script which should run an ffmpeg command with this function:

    def transcode(in_path, out_path):
       cmd = ["ffmpeg", "-y", "-i", in_path, '-vf smartblur=lr=1']
       cmd += ["-an", out_path]
       print("Running:", " ".join(cmd))
       subprocess.run(cmd, stdout=cmdout, stderr=cmdout)

    When I run the Python script it fails with this ffmpeg error:

    Running: ffmpeg -y -i raid/orig/scenes/train/5786088.mp4 -vf smartblur=lr=1 -an raid/4K/scenes/train/5786088.mp4
    ffmpeg version 2.8.15-0ubuntu0.16.04.1 Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
     configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
     libavutil      54. 31.100 / 54. 31.100
     libavcodec     56. 60.100 / 56. 60.100
     libavformat    56. 40.101 / 56. 40.101
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 40.101 /  5. 40.101
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  2.101 /  1.  2.101
     libpostproc    53.  3.100 / 53.  3.100
    Unrecognized option 'vf smartblur=lr=1'.
    Error splitting the argument list: Option not found

    You can see the command it tries to execute in the first line; when I run it from the command line it works fine. When I run the command in the shell, it prints the same ffmpeg version and build parameters as shown in the error above.

    I feel like I missed something simple yet crucial; can anyone point me in the right direction?
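    The likely cause: because the command is passed to subprocess as a list, no shell word-splitting happens, so '-vf smartblur=lr=1' reaches ffmpeg as a single argument and is read as one unknown option name (hence "Unrecognized option 'vf smartblur=lr=1'"). A minimal corrected sketch of the function, assuming nothing else in the script changes (output redirection dropped for brevity):

    import subprocess

    def transcode(in_path, out_path):
        # Each option and its value must be its own list element; subprocess
        # does no word-splitting when given a list.
        cmd = ["ffmpeg", "-y", "-i", in_path, "-vf", "smartblur=lr=1", "-an", out_path]
        print("Running:", " ".join(cmd))
        subprocess.run(cmd, check=True)  # raises CalledProcessError if ffmpeg fails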

  • How to force A/V sync using mkvmerge and external time-codes?

    19 April 2017, by b..

    Background

    I'm working on a project where video and audio are algorithmic interpretations of an MKV source file; I use ffmpeg -ss and -t to extract a particular region of audio and video to separate files. I use scene changes in the video in the audio process (i.e. the audio changes on video scene change), so sync is crucial.

    Audio is 48 kHz, using 512-sample blocks.
    Video is 23.976 fps (I also tried 24).

    I store the frame onset of sceneChanges in a file in terms of cumulative blocks:

    blocksPerFrame = (48000 / 512) / 23.976
    sceneOnsetBlock = sceneOnsetFrame*blocksPerFrame

    I use these blocks in my audio code to treat the samples associated with each scene as a group.
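    For concreteness, the arithmetic above gives 48000 / 512 = 93.75 audio blocks per second, i.e. roughly 3.91 blocks per video frame. A quick check (the scene-change frame index is a made-up example):

    sample_rate = 48000
    block_size = 512
    fps = 23.976

    blocks_per_frame = (sample_rate / block_size) / fps   # 93.75 / 23.976 ≈ 3.9102
    scene_onset_frame = 240                               # hypothetical scene-change frame
    scene_onset_block = scene_onset_frame * blocks_per_frame
    print(blocks_per_frame, scene_onset_block)            # ≈ 3.9102, ≈ 938.44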

    When I combine the audio and video back together (currently using ffmpeg to generate mp4 (video) and mp3 (audio) in an MKV container), the audio and video start off in sync but increasingly drift until they end up 35 seconds apart. The worst part is that the audio lag is nonlinear! By nonlinear I mean that if I plot the lag against the location of that lag in time, I don't get a line but the curve shown in the image below. I can't just shift or scale the audio to fit the video because of this nonlinearity. I cannot figure out the cause of this nonlinearly increasing audio delay; I've double- and triple-checked my math.

    (Image: cumulative lag against time)

    Since I know the exact timing of scene changes, I should be able to generate "external timecodes" (from the blocks above) for mkvmerge to perfectly sync the output!

    Subquestions:

    1. Is this the best approach (beyond trying to figure out what went wrong in the first place)? As I'm using my video frames as a reference, if I use the scene changes as timecodes for the audio, will it force the video to match the audio or vice versa? I'm much less concerned with the duration than with the sync. The video was much more laborious to produce, so I'd rather lose some sound than some frames.

    2. I'm not clear on what numbers to use in the timecodes file. According to the mkvmerge documentation, "For video this is exactly one frame, for audio this is one packet of the specific audio type." Since I'm using MP3, what is the packet size? Ideally, I could specify a packet size (in the audio encoder?) that matches my block size (512) to keep things consistent and simple. Can I do this with ffmpeg? (A rough sketch of generating such a timecode file follows below.)
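    As a hedged sketch of the external-timecode idea: a "timecode format v2" file lists one timestamp in milliseconds per video frame, so a file for the video track can be generated directly from the frame count and frame rate. The file names, the frame count and the mkvmerge invocation below are assumptions, not details from the question:

    # Generate a "timecode format v2" file for the video track.
    fps = 24000 / 1001               # 23.976... as an exact ratio
    n_frames = 1000                  # placeholder: total number of video frames

    with open("video_timecodes_v2.txt", "w") as f:
        f.write("# timecode format v2\n")
        for frame in range(n_frames):
            f.write("%.6f\n" % (frame * 1000.0 / fps))   # one timestamp per frame, in ms

    The file could then be passed to mkvmerge along the lines of mkvmerge -o out.mkv --timecodes 0:video_timecodes_v2.txt video.mp4 audio.mp3 (track id and file names are placeholders; newer MKVToolNix releases call the option --timestamps).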

    Thank you!

  • Android encode video with ffmpeg while it is still recording

    30 December 2016, by Andreas Pabst

    I want to develop an Android application which allows me to continuously record a video and upload parts of the video to a server without stopping the recording.
    It is crucial for the application that I can record up to 60 min without stopping the video.

    Initial approach

    The application consists of two parts:

    1. A MediaRecorder which records video continuously from the camera.
    2. A cutter/copy part: while the video is being recorded I have to take out certain segments and send them to a server.
      This part was implemented using libffmpeg.so from http://ffmpeg4android.netcompss.com/. I used their VideoKit wrapper, which allows me to run ffmpeg directly with any params I need.

    My Problem

    I tried the ffmpeg command with the params

    ffmpeg -ss 00:00:03 -i  -t 00:00:05 -vcodec copy -acodec copy  

    which worked great for me once Android's MediaRecorder had finished recording.

    When I execute the same command, while the MediaRecorder is recording the file, ffmpeg exits with the error message "Operation not permitted".

    • I don't think the error message means that Android is blocking access to the file; I think ffmpeg needs the "moov atom" to find the proper position in the video (a rough way to test this is sketched below).
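    One rough, purely illustrative way to test that hypothesis (sketched here with Python and the ffprobe CLI, not Android code) is to probe the in-progress file before cutting; an MP4 whose moov atom has not been written yet will typically fail the probe, while a finalized file will pass:

    import subprocess

    def is_parsable(path):
        # ffprobe exits non-zero if it cannot parse the container.
        result = subprocess.run(
            ["ffprobe", "-v", "error", "-show_format", path],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0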

    For that reason I thought of other approaches (which don't need the moov atom):

    1. Create an RTSP stream with Android and access the RTSP stream later. The problem is that, to my knowledge, the Android SDK doesn't support recording to an RTSP stream.
    2. Maybe it is possible to access the camera directly with ffmpeg (/dev/video0 seems to be a video device?!)
    3. I read about WebM as an alternative for streaming; maybe Android can record WebM streams?!

    TL;DR:

    I want to access a video file with ffmpeg (libffmpeg.so) while it is still being recorded. ffmpeg exits with the error message "Operation not permitted".

    Goal:

    My goal is to record video (and audio), take parts of the video while it is still being recorded, and upload them to the server.

    Maybe you can help me solve the problem, or you have other ideas on how to approach it.

    Thanks a lot in advance.