
Other articles (73)
-
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in standalone form.
To get a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, further modifications are also required (...) -
Improving the base version
13 September 2013. A nicer multiple select
The Chosen plugin improves the ergonomics of multiple-select fields; see the two images that follow for a comparison.
To use it, enable the Chosen plugin (site configuration > plugin management), then configure it (templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...) -
Contribute to documentation
13 April 2011. Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation from users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
To contribute, register for the project users’ mailing (...)
On other sites (11503)
-
Capture rtsp stream to MP4 file with wallclock timestamp preserve moov atom
11 April 2024, by Jax2171. We are looking for the right ffmpeg options to capture the RTSP stream from an IP camera, using the epoch wall-clock time as the timestamp reference. Our current command is as follows:

ffmpeg -use_wallclock_as_timestamps 1 -rtsp_transport tcp -i rtsp://admin:admin@192.168.5.21/h264/ch1/main/av_stream -c:v copy -c:a aac -copyts -f mp4 -y record.mp4

The command works perfectly, except that if the ffmpeg process stops abruptly for any reason, the output file record.mp4 is unreadable, failing with the error:

moov atom not found

We simulated the abrupt termination by running pkill -9 ffmpeg while the capture was in progress.

Adding the -movflags +faststart option didn't help; the file still cannot be played. (faststart only relocates the moov atom to the front of the file after a successful write, so it changes nothing when the process is killed mid-recording.)

Adding the -movflags frag_keyframe+empty_moov option keeps the file playable, but resets the timestamps to start from 0, so it does not meet our goal either.

PS: the Matroska container seems like a solution, but we ran into frequent lag and duplicate frames carrying the same timestamp, so we set it aside.
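For what it's worth, one commonly used workaround for the missing-moov problem is the segment muxer: each closed segment gets its own complete moov atom, so a crash loses at most the segment currently being written, and -strftime 1 stamps the wall-clock start time into each filename. A sketch, not tested against this camera (URL and segment length are taken from or assumed for this setup):

```shell
# Hypothetical variant of the capture command using ffmpeg's segment muxer.
# Each 60 s segment is finalized with its own moov atom; the filename
# carries the wall-clock start time of the segment.
ffmpeg -use_wallclock_as_timestamps 1 -rtsp_transport tcp \
  -i rtsp://admin:admin@192.168.5.21/h264/ch1/main/av_stream \
  -c:v copy -c:a aac \
  -f segment -segment_time 60 -segment_format mp4 \
  -reset_timestamps 1 -strftime 1 "record_%Y%m%d_%H%M%S.mp4"
```

The trade-off is that the recording is split across files, but the epoch time survives in the filenames even though each segment's internal timestamps restart at 0.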


Thanks in advance.


-
How to trim a file with FFmpeg programmatically? (libavformat, avutils, ...)
5 February 2016, by user361526. I'm building an iOS app where re-encoding and trimming a video in the background is necessary.
I cannot use the iOS libraries (AVFoundation), since they rely on the GPU, and no app can access the GPU while it is backgrounded.
Because of this I switched to FFmpeg, compiled it (alongside libx264), and integrated it into my iOS app.
To sum things up, what I need is:
- Trim the video for the first 10 seconds
- re-scale the video
After a couple of weeks, and banging my head against the wall quite often, I managed to:
- split the video container into streams (demuxing)
- copy the audio stream into the output stream (no decoding or encoding)
- decode the video stream, run the necessary filters per frame, encode each resulting frame and remux it to the output stream (I decode the h264, filter it, re-encode it back to h264)
If I were to run ffmpeg from the command line, I would run it like this:
ffmpeg -i input.MOV -ss 0 -t 10 -vf scale=320:240 -c:v libx264 -preset ultrafast -c:a copy output.mkv
My concern is how to trim the video. Although I could count the number of video frames I decode/encode and, based on the FPS, decide when to stop, I cannot do the same with the audio, since I only demux and remux it.
Ideally - before scaling the video - I would run a process to trim the video by copying the 10 seconds of each stream (video and audio) into a new video container.
How do I achieve this through the AV libraries?
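Not a full libavformat answer, but the key arithmetic is the same whichever API is used: convert the 10-second cutoff into each stream's own time_base (this is what av_rescale_q does in the C API) and drop any packet whose pts exceeds that value, for the audio stream just as for the video stream. A Python sketch of the conversion, illustrative only (the function name is mine):

```python
from fractions import Fraction

def seconds_to_pts(seconds, time_base):
    # Express a wall-time instant in stream ticks: the same rescaling
    # that av_rescale_q performs between time bases in libavutil.
    return int(Fraction(seconds).limit_denominator() / time_base)

# A 10 s cutoff in a 1/44100 audio time_base and a 1/600 video time_base:
audio_cutoff = seconds_to_pts(10, Fraction(1, 44100))  # 441000
video_cutoff = seconds_to_pts(10, Fraction(1, 600))    # 6000
```

With a per-stream cutoff like this, the remuxed audio packets can be filtered on their pts alone, without decoding them.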
-
MoviePy write_videofile is very slow [closed]
3 November 2024, by RukshanJS. I've seen multiple questions on SO related to this, but couldn't find a solid answer. The following is my code.


async def write_final_video(clip, output_path, results_dir):
    cpu_count = psutil.cpu_count(logical=False)
    threads = max(1, min(cpu_count - 1, 16))

    os.makedirs(results_dir, exist_ok=True)

    output_params = {
        "codec": await detect_hardware_encoder(),
        "audio_codec": "aac",
        "fps": 24,
        "threads": threads,
        "preset": "medium",
        "bitrate": "5000k",
        "audio_bitrate": "192k",
    }

    logger.info(f"Starting video writing process with codec: {output_params['codec']}")
    try:
        await asyncio.to_thread(
            clip.write_videofile,
            output_path,
            **output_params,
        )
    except Exception as e:
        logger.error(f"Error during video writing with {output_params['codec']}: {str(e)}")
        logger.info("Falling back to libx264 software encoding")
        output_params["codec"] = "libx264"
        output_params["preset"] = "medium"
        try:
            await asyncio.to_thread(
                clip.write_videofile,
                output_path,
                **output_params,
            )
        except Exception as e:
            logger.error(f"Error during fallback video writing: {str(e)}")
            raise
    finally:
        logger.info("Video writing process completed")

    # Calculate and return the relative path
    relative_path = os.path.relpath(output_path, start=os.path.dirname(ARTIFACTS_DIR))
    return relative_path



And the helper function that detects the encoder is below:


async def detect_hardware_encoder():
    try:
        result = await asyncio.to_thread(
            subprocess.run,
            ["ffmpeg", "-encoders"],
            capture_output=True,
            text=True
        )

        # Check for hardware encoders in order of preference
        if "h264_videotoolbox" in result.stdout:
            return "h264_videotoolbox"
        elif "h264_nvenc" in result.stdout:
            return "h264_nvenc"
        elif "h264_qsv" in result.stdout:
            return "h264_qsv"

        return "libx264"  # Default software encoder
    except Exception:
        logger.warning("Failed to check for hardware acceleration. Using default encoder.")
        return "libx264"
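As an aside, the encoder preference inside detect_hardware_encoder can be factored into a pure function that takes the ffmpeg -encoders output as a string, which makes the ordering testable without invoking ffmpeg. A sketch (the function name is mine, not from the original code):

```python
def pick_encoder(encoders_output: str) -> str:
    """Pick a hardware H.264 encoder if present, else fall back to libx264."""
    # Same preference order as in detect_hardware_encoder above
    for encoder in ("h264_videotoolbox", "h264_nvenc", "h264_qsv"):
        if encoder in encoders_output:
            return encoder
    return "libx264"
```

detect_hardware_encoder would then only run the subprocess and hand result.stdout to this function.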



This code makes rendering a 15 s video take around six minutes or more, which is not acceptable.


t: 62%|██████▏ | 223/361 [04:40<03:57, 1.72s/it, now=None]


My machine uses MPS (Apple Silicon Metal Performance Shaders), but the code should also work with NVIDIA CUDA.


Update:
Question: how can I reduce the time it takes to write the video?
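Not an authoritative answer, but in the parameters above the knob most likely to dominate encode time is preset (which only applies to libx264; hardware encoders ignore it). A small sketch of deriving a speed-oriented variant of the question's output_params, with the trade-off noted in comments:

```python
def speed_params(base_params):
    # Return a copy of the write_videofile parameters tuned for encode
    # speed rather than compression efficiency. "ultrafast" produces
    # larger files for the same quality but encodes far faster; it is
    # only honored by libx264, so it is harmless with hardware codecs.
    fast = dict(base_params)  # leave the caller's dict untouched
    fast["preset"] = "ultrafast"
    return fast

params = speed_params({"codec": "libx264", "preset": "medium", "threads": 8})
```

If the hardware encoder is actually being selected, the bottleneck may instead be MoviePy's frame-by-frame pipeline rather than the encoder settings; timing the same clip with plain ffmpeg would tell the two apart.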