
Media (29)
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (78)
-
Contributing to its translation
10 April 2011 — You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
Currently MediaSPIP is only available in French and (...) -
MediaSPIP v0.2
21 June 2013 — MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...) -
Making the files available
14 April 2011 — By default, on initialization, MediaSPIP does not allow visitors to download files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible, and easy, to let visitors access these documents in various forms.
All of this happens in the skeleton's configuration page. You need to go to the channel's administration area and choose in the navigation (...)
On other sites (12275)
-
MoviePy write_videofile is very slow [closed]
3 November 2024, by RukshanJS — I've seen multiple questions on SO related to this, but couldn't find a solid answer. The following is my code.


async def write_final_video(clip, output_path, results_dir):
    cpu_count = psutil.cpu_count(logical=False)
    threads = max(1, min(cpu_count - 1, 16))

    os.makedirs(results_dir, exist_ok=True)

    output_params = {
        "codec": await detect_hardware_encoder(),
        "audio_codec": "aac",
        "fps": 24,
        "threads": threads,
        "preset": "medium",
        "bitrate": "5000k",
        "audio_bitrate": "192k",
    }

    logger.info(f"Starting video writing process with codec: {output_params['codec']}")
    try:
        await asyncio.to_thread(
            clip.write_videofile,
            output_path,
            **output_params,
        )
    except Exception as e:
        logger.error(f"Error during video writing with {output_params['codec']}: {str(e)}")
        logger.info("Falling back to libx264 software encoding")
        output_params["codec"] = "libx264"
        output_params["preset"] = "medium"
        try:
            await asyncio.to_thread(
                clip.write_videofile,
                output_path,
                **output_params,
            )
        except Exception as e:
            logger.error(f"Error during fallback video writing: {str(e)}")
            raise
    finally:
        logger.info("Video writing process completed")

    # Calculate and return the relative path
    relative_path = os.path.relpath(output_path, start=os.path.dirname(ARTIFACTS_DIR))
    return relative_path



And the helper function that picks the encoder is below:


async def detect_hardware_encoder():
    try:
        result = await asyncio.to_thread(
            subprocess.run,
            ["ffmpeg", "-encoders"],
            capture_output=True,
            text=True
        )

        # Check for hardware encoders in order of preference
        if "h264_videotoolbox" in result.stdout:
            return "h264_videotoolbox"
        elif "h264_nvenc" in result.stdout:
            return "h264_nvenc"
        elif "h264_qsv" in result.stdout:
            return "h264_qsv"

        return "libx264"  # Default software encoder
    except Exception:
        logger.warning("Failed to check for hardware acceleration. Using default encoder.")
        return "libx264"
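The selection logic above can be factored out of the subprocess call into a pure function, which makes it unit-testable without re-running `ffmpeg -encoders`. A minimal sketch; the preference order mirrors the helper above, and the `-hide_banner` flag and `shutil.which` guard are my additions:

```python
import shutil
import subprocess

# Preference order is an assumption; reorder for your platform.
PREFERRED = ["h264_videotoolbox", "h264_nvenc", "h264_qsv"]

def pick_encoder(encoders_output: str, preferred=PREFERRED, fallback="libx264") -> str:
    """Return the first preferred encoder mentioned in `ffmpeg -encoders` output."""
    for name in preferred:
        if name in encoders_output:
            return name
    return fallback

def detect_hardware_encoder() -> str:
    """Query ffmpeg once; fall back to software x264 when ffmpeg is missing."""
    if shutil.which("ffmpeg") is None:
        return "libx264"
    result = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                            capture_output=True, text=True)
    return pick_encoder(result.stdout)
```

This keeps the I/O at the edge and the decision logic testable with plain strings.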



This code takes around 6 minutes or more to render a 15-second video, which is not acceptable.


t: 62%|██████▏ | 223/361 [04:40<03:57, 1.72s/it, now=None]


My setup uses MPS (Apple Silicon Metal Performance Shaders), but the code should also work with NVIDIA CUDA.


Update:
Question: How can I reduce the time it takes to write the video?
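One knob relevant to the timings above is the x264 `preset`: `medium` prioritizes compression over speed. A hypothetical helper sketching speed-biased write parameters; the keyword names match MoviePy's `write_videofile`, and since `preset` is an x264-family option it is only set for `libx264` here:

```python
def fast_write_params(codec: str, threads: int) -> dict:
    """Build write_videofile kwargs biased toward speed over file size.

    Assumption: `preset` values like `ultrafast` are meaningful for
    libx264-style encoders; hardware encoders use their own knobs.
    """
    params = {
        "codec": codec,
        "audio_codec": "aac",
        "fps": 24,
        "threads": threads,
        "bitrate": "5000k",
    }
    if codec == "libx264":
        params["preset"] = "ultrafast"  # trade compression efficiency for speed
    return params
```

These kwargs would be splatted into `clip.write_videofile(output_path, **params)` in place of `output_params` above.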


-
FFMPEG C Library : Encoding h264 stream into Matroska .mkv on MacOS
2 May 2024, by KaisoHHH — FFMPEG C Library: Encoding h264 stream into Matroska .mkv container creates corrupt files


The question above is related, and its answer of allocating memory for the codec context's extradata works.


But on macOS, with the videotoolbox encoder, this approach creates a corrupted video file for any container, including mp4 and mkv. The one thing it does better for mkv is that at least the corrupted file is not zero-sized.


I was wondering: what is the correct way to encode a stream to mkv on macOS?


Below is my code initializing the output codec:


auto path = previewPath.toStdString(); // Convert file path to standard string

// qInfo() << "[test0306] output path is " << _filePath;

// Allocate the output media context
avformat_alloc_output_context2(&_outputFormat, NULL, NULL, path.c_str());
if (!_outputFormat) {
    printf("Could not deduce output format from file extension: using MPEG.\n");
    avformat_alloc_output_context2(&_outputFormat, NULL, "mp4", path.c_str());
}
if (!_outputFormat) {
    emit cacheError(previewPath, "Could not create output context\n");
    return;
}

// Find the video encoder
auto codecName = metaData["encoderId"].toString().toStdString();
const AVCodec *codec = avcodec_find_encoder_by_name(codecName.c_str());
qDebug() << "[test0307] this is the meta data encoder info: " << codecName;

if (!codec) {
    qDebug() << "Cannot find encoder by name";
    emit cacheError(previewPath, "Cannot find encoder by name");
    return;
}

// Set up the SAR, time base and frame rate from metadata
// AVRational timeBase = {metaData["timeBaseNum"].toInt(), metaData["timeBaseDen"].toInt()};
AVRational timeBase = {metaData["outputTimeBaseNum"].toInt(), metaData["outputTimeBaseDen"].toInt()};
// AVRational framerate = {metaData["framerateNum"].toInt(), metaData["framerateDen"].toInt()};
AVRational framerate = av_d2q(metaData["outputFps"].toDouble(), 100000);
AVRational sar = {metaData["sarNum"].toInt(), metaData["sarDen"].toInt()};
AVRational avg_frame_rate = {metaData["avg_frame_rate_num"].toInt(), metaData["avg_frame_rate_den"].toInt()};
AVRational r_frame_rate = {metaData["r_frame_rate_num"].toInt(), metaData["r_frame_rate_den"].toInt()};

qInfo() << "Diskcache is gonna cache video on fps" << metaData["outputFps"].toDouble();

// Create a new stream in the output file
AVStream* outSt = avformat_new_stream(_outputFormat, codec);
if (!outSt) {
    emit cacheError(previewPath, "Failed to create output stream\n");
    return;
}
// outSt->time_base = timeBase;
// outSt->id = _outputFormat->nb_streams - 1;

// Allocate the codec context for the encoder
_outputCodecContext = avcodec_alloc_context3(codec);
if (!_outputCodecContext) {
    emit cacheError(previewPath, "Could not alloc an encoding context\n");
    return;
}

// Configure the codec context
_outputCodecContext->codec_id = codec->id;
_outputCodecContext->codec_type = codec->type;
// if (metaData["bitrate"].toInt() > 0) {
_outputCodecContext->bit_rate = metaData["bitrate"].toInt();
// } else {
//     _outputCodecContext->bit_rate = 80000;
// }
_outputCodecContext->max_b_frames = 0;
_outputCodecContext->width = metaData["outputWidth"].toInt();
_outputCodecContext->height = metaData["outputHeight"].toInt();
_outputCodecContext->framerate = framerate;        // Set from metadata
_outputCodecContext->time_base = timeBase;         // Set from metadata
_outputCodecContext->pix_fmt = AV_PIX_FMT_YUV420P; // img needs yuv420p

outSt->time_base = _outputCodecContext->time_base;
outSt->avg_frame_rate = framerate;
outSt->r_frame_rate = framerate;
outSt->disposition = metaData["disposition"].toInt();
outSt->discard = static_cast<AVDiscard>(metaData["discard"].toInt());
outSt->sample_aspect_ratio = sar;
outSt->event_flags = metaData["event_flags"].toInt();
outSt->pts_wrap_bits = metaData["pts_wrap_bits "].toInt();

// codecpar->extradata needed for MKV; this breaks Mac playing back the preview
#ifdef Q_OS_WINDOWS
_outputCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
_outputCodecContext->extradata = (uint8_t*)av_mallocz(sizeof(int) * sizeof(int) * sizeof(int) / 2);
_outputCodecContext->extradata_size = sizeof(int) * sizeof(int) * sizeof(int) / 2;
#elif defined(Q_OS_MACOS)

#endif

// Open the codec
AVDictionary *opt = NULL;
int ret = avcodec_open2(_outputCodecContext, codec, &opt);
av_dict_free(&opt);
if (ret < 0) {
    emit cacheError(previewPath, "Could not open video codec\n");
    return;
}

// Copy codec parameters from context to the stream
ret = avcodec_parameters_from_context(outSt->codecpar, _outputCodecContext);
if (ret < 0) {
    emit cacheError(previewPath, "Could not copy the stream parameters\n");
    return;
}

// Print format details
av_dump_format(_outputFormat, 0, path.c_str(), 1);

// Open the output file
ret = avio_open(&_outputFormat->pb, path.c_str(), AVIO_FLAG_WRITE);
if (ret < 0) {
    emit cacheError(previewPath, "Could not open output file\n");
    return;
}

// Write the stream header
AVDictionary *params = NULL;
// av_dict_set(&params, "movflags", "frag_keyframe+empty_moov+delay_moov+use_metadata_tags+write_colr", 0);
ret = avformat_write_header(_outputFormat, NULL);
if (ret < 0) {
    char err_buf[AV_ERROR_MAX_STRING_SIZE]; // Buffer for error strings.
    if (av_strerror(ret, err_buf, sizeof(err_buf)) == 0) { // Safely get the error string.
        qInfo() << "write header error:" << err_buf;
        emit cacheError(previewPath, "Error occurred when opening output file", ret);
    } else {
        qInfo() << "write header error: Unknown error with code" << ret;
        emit cacheError(previewPath, "Error occurred when opening output file: Unknown error", ret);
    }
    return;
}


I have tried limiting this approach to Windows only, and I am still waiting to find a solution for doing it with videotoolbox on macOS.
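As a cross-check outside the C API, it can help to confirm that the same encoder and container combination works via the ffmpeg CLI; if the CLI produces a playable .mkv, the problem is in the muxer or extradata setup rather than in videotoolbox itself. A hypothetical command builder (input path, output path, and bitrate are placeholders, not values from the code above):

```python
def build_videotoolbox_mkv_cmd(src: str, dst: str, bitrate: str = "5M") -> list:
    """Build an ffmpeg command encoding with h264_videotoolbox into Matroska.

    The CLI handles the global-header (extradata) requirement of the MKV
    muxer for you, which is the part the C code has to replicate manually.
    """
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", "h264_videotoolbox",
        "-b:v", bitrate,
        "-pix_fmt", "yuv420p",
        "-f", "matroska",
        dst,
    ]
```

Running the resulting command with `subprocess.run` on a short sample clip gives a known-good reference file to compare against.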


-
Why doesn't the ffmpeg output display the stream in the browser ? [closed]
10 May 2024, by Tebyy — Why is it that when I create a livestream in Python using ffmpeg and then open the page in a browser, the page keeps loading continuously, and in the PyCharm logs I see binary data? No errors are displayed, and the code seems correct to me. I even tried saving to a file for testing, and when I play that video everything works fine. Does anyone know what might be wrong here?


Code:


def generate_frames():
    cap = cv2.VideoCapture(os.path.normpath(app_root_dir().joinpath("data/temp", "video-979257305707693982.mp4")))
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        yield frame


@app.route('/video_feed')
def video_feed():
    ffmpeg_command = [
        'ffmpeg', '-f', 'rawvideo', '-pix_fmt', 'bgr24',
        '-s:v', '1920x1080', '-r', '60',
        '-i', '-', '-vf', 'setpts=2.5*PTS',  # Video speed
        '-c:v', 'libvpx-vp9', '-g', '60', '-keyint_min', '60',
        '-b:v', '6M', '-minrate', '4M', '-maxrate', '12M', '-bufsize', '8M',
        '-crf', '0', '-deadline', 'realtime', '-tune', 'psnr', '-quality', 'good',
        '-tile-columns', '6', '-threads', '8', '-lag-in-frames', '16',
        '-f', 'webm', '-'
    ]
    ffmpeg_process = subprocess.Popen(ffmpeg_command, stdin=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=-1)
    frames_generator = generate_frames()
    for frame in frames_generator:
        ffmpeg_process.stdin.write(frame)
        ffmpeg_process.stdin.flush()

    ffmpeg_process.stdin.close()
    ffmpeg_process.wait()

    def generate_video_stream(process):
        startTime = time.time()
        buffer = []
        sentBurst = False
        for chunk in iter(lambda: process.stderr.read(4096), b''):
            buffer.append(chunk)

            # Minimum buffer time, 3 seconds
            if sentBurst is False and time.time() > startTime + 3 and len(buffer) > 0:
                sentBurst = True
                for i in range(0, len(buffer) - 2):
                    print("Send initial burst #", i)
                    yield buffer.pop(0)

            elif time.time() > startTime + 3 and len(buffer) > 0:
                yield buffer.pop(0)

            process.poll()
            if isinstance(process.returncode, int):
                if process.returncode > 0:
                    print('FFmpeg Error', process.returncode)

                break

    return Response(stream_with_context(generate_video_stream(ffmpeg_process)), mimetype='video/webm', content_type="video/webm; codecs=vp9", headers=Headers([("Connection", "close")]))
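One detail worth checking in the route above: the muxed WebM is written by ffmpeg to stdout (the trailing `-` in the command), while stderr carries only the log, yet `generate_video_stream` reads from `process.stderr`. A minimal sketch of chunked streaming from a child process's stdout, using a stand-in subprocess instead of ffmpeg so it runs anywhere:

```python
import subprocess
import sys

def stream_stdout(process, chunk_size=4096):
    """Yield the subprocess's stdout in fixed-size chunks until EOF."""
    for chunk in iter(lambda: process.stdout.read(chunk_size), b""):
        yield chunk
    process.wait()

# Stand-in for the ffmpeg pipeline: a child process writing bytes to stdout.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.buffer.write(b'webm-bytes' * 3)"],
    stdout=subprocess.PIPE,
)
data = b"".join(stream_stdout(proc))
```

In the Flask route this generator would wrap the ffmpeg process (launched with `stdout=subprocess.PIPE`) and feed `Response` directly, instead of draining stderr.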