
Media (1)
-
Map of Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (15)
-
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
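As a rough illustration only (this is not MediaSPIP's actual pipeline, and the file names are hypothetical), the conversions described above map onto ffmpeg command lines such as:

ffmpeg -i upload.avi -c:v libtheora -c:a libvorbis media.ogv
ffmpeg -i upload.avi -c:v libvpx -c:a libvorbis media.webm
ffmpeg -i upload.avi -c:v libx264 -c:a aac media.mp4
ffmpeg -i recording.wav -c:a libvorbis media.ogg
ffmpeg -i recording.wav -c:a libmp3lame media.mp3

-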
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Other interesting software
13 April 2011, by
We don't claim to be the only ones doing what we do, and certainly don't claim to be the best; we simply try to do it well, and to keep getting better.
The following list covers software that does more or less what MediaSPIP does, or that MediaSPIP more or less tries to emulate.
We haven't used them ourselves, but you can take a look.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...)
On other sites (3637)
-
Error using FFmpeg.wasm for audio files in React: "ffmpeg.FS('readFile', 'output.mp3') error. Check if the path exists"
25 February 2021, by Rayhan Memon
I'm currently building a browser-based audio editor, and I'm using ffmpeg.wasm (a pure WebAssembly/JavaScript port of FFmpeg) to do it.


I'm using this excellent example, which allows you to upload a video file and convert it into a gif:


import React, { useState, useEffect } from 'react';
import './App.css';

import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';
const ffmpeg = createFFmpeg({ log: true });

function App() {
  const [ready, setReady] = useState(false);
  const [video, setVideo] = useState();
  const [gif, setGif] = useState();

  const load = async () => {
    await ffmpeg.load();
    setReady(true);
  }

  useEffect(() => {
    load();
  }, [])

  const convertToGif = async () => {
    // Write the file to memory
    ffmpeg.FS('writeFile', 'test.mp4', await fetchFile(video));

    // Run the FFmpeg command
    await ffmpeg.run('-i', 'test.mp4', '-t', '2.5', '-ss', '2.0', '-f', 'gif', 'out.gif');

    // Read the result
    const data = ffmpeg.FS('readFile', 'out.gif');

    // Create a URL
    const url = URL.createObjectURL(new Blob([data.buffer], { type: 'image/gif' }));
    setGif(url)
  }

  return ready ? (
    <div className="App">
      { video && <video
          controls
          width="250"
          src={URL.createObjectURL(video)}>
        </video> }

      <input type="file" onChange={(e) => setVideo(e.target.files?.item(0))} />

      <h3>Result</h3>

      <button onClick={convertToGif}>Convert</button>

      { gif && <img src={gif} width="250" /> }
    </div>
  ) : (
    <p>Loading...</p>
  );
}

export default App;



I've modified the above code to take an mp3 file recorded in the browser (recorded using the npm package 'mic-recorder-to-mp3' and passed to this component as a blobURL in the global state) and do something to it using ffmpeg.wasm:


import React, { useContext, useState, useEffect } from 'react';
import Context from '../../store/Context';
import Toolbar from '../Toolbar/Toolbar';
import AudioTranscript from './AudioTranscript';

import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

// Create an ffmpeg instance and set 'log' to true so we can see
// everything it does in the console
const ffmpeg = createFFmpeg({ log: true });

const AudioEditor = () => {
  // Set up global state and get the most recent recording
  const { globalState } = useContext(Context);
  const { blobURL } = globalState;

  // ready flag for when ffmpeg is loaded
  const [ready, setReady] = useState(false);

  const [outputFileURL, setOutputFileURL] = useState('');

  // Load FFmpeg asynchronously and set ready when it's ready
  const load = async () => {
    await ffmpeg.load();
    setReady(true);
  }

  // Use useEffect to run the 'load' function on mount
  useEffect(() => {
    load();
  }, []);

  const ffmpegTest = async () => {
    // must first write file to memory as test.mp3
    ffmpeg.FS('writeFile', 'test.mp3', await fetchFile(blobURL));

    // Run the FFmpeg command
    // in this case, trim the file down to 1.5s and save to memory as output.mp3
    ffmpeg.run('-i', 'test.mp3', '-t', '1.5', 'output.mp3');

    // Read the result from memory
    const data = ffmpeg.FS('readFile', 'output.mp3');

    // Create a URL so it can be used in the browser
    const url = URL.createObjectURL(new Blob([data.buffer], { type: 'audio/mp3' }));
    setOutputFileURL(url);
  }

  return ready ? (
    <div>
      <AudioTranscript />
      <Toolbar />
      <button onClick={ffmpegTest}>
        Edit
      </button>
      {outputFileURL &&
        <audio controls src={outputFileURL} />
      }
    </div>
  ) : (
    <div>
      Loading...
    </div>
  )
}

export default AudioEditor;



This code returns the following error when I press the edit button to call the ffmpegTest function:

ffmpeg.FS('readFile', 'output.mp3') error. Check if the path exists

I've experimented, and when I tweak the culprit line of code to:


const data = ffmpeg.FS('readFile', 'test.mp3');



the function runs without error, simply returning the input file. So I assume there must be something wrong with the ffmpeg.run() line, perhaps not storing 'output.mp3' in memory? I can't for the life of me figure out what's going on... any help would be appreciated!
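For reference, one difference that stands out against the gif example earlier in this question: there, ffmpeg.run() is awaited before readFile, while ffmpegTest calls it without await. A hedged sketch of the trim step with the await restored (same file names as above; offered as a guess, not a confirmed fix):

const ffmpegTest = async () => {
  // Write the recording into ffmpeg.wasm's in-memory filesystem
  ffmpeg.FS('writeFile', 'test.mp3', await fetchFile(blobURL));

  // ffmpeg.run() returns a Promise; without await, the readFile below
  // can run before output.mp3 has been written
  await ffmpeg.run('-i', 'test.mp3', '-t', '1.5', 'output.mp3');

  const data = ffmpeg.FS('readFile', 'output.mp3');
  const url = URL.createObjectURL(new Blob([data.buffer], { type: 'audio/mp3' }));
  setOutputFileURL(url);
}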


-
FFMPEG using AV_PIX_FMT_D3D11 gives "Error registering the input resource" from NVENC
13 November 2024, by nbabcock
Input frames start on the GPU as ID3D11Texture2D pointers.

I encode them to H264 using FFmpeg + NVENC. NVENC works perfectly if I download the textures to CPU memory as format AV_PIX_FMT_BGR0, but I'd like to cut out the CPU texture download entirely, and pass the GPU memory pointer directly into the encoder in native format. I write frames like this:

int write_gpu_video_frame(ID3D11Texture2D* gpuTex, AVFormatContext* oc, OutputStream* ost) {
    AVFrame *hw_frame = ost->hw_frame;

    printf("gpuTex address = 0x%x\n", &gpuTex);

    hw_frame->data[0] = (uint8_t *) gpuTex;
    hw_frame->data[1] = (uint8_t *) (intptr_t) 0;
    hw_frame->pts     = ost->next_pts++;

    return write_frame(oc, ost->enc, ost->st, hw_frame);
    // write_frame is identical to sample code in ffmpeg repo
}



Running the code with this modification gives the following error:


gpuTex address = 0x4582f6d0
[h264_nvenc @ 00000191233e1bc0] Error registering an input resource: invalid call (9):
[h264_nvenc @ 00000191233e1bc0] Could not register an input HW frame
Error sending a frame to the encoder: Unknown error occurred
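For context, FFmpeg's D3D11 hardware frames are normally allocated from the frames context's own texture pool rather than aliased to an external texture. A hedged sketch of that pool-based pattern, modeled on FFmpeg's hardware-encode examples rather than on anything confirmed in this question:

/* Allocate a frame backed by the pool that av_hwframe_ctx_init() created;
 * for AV_PIX_FMT_D3D11, data[0] is an ID3D11Texture2D* and data[1] the
 * texture array slice index. */
AVFrame *hw_frame = av_frame_alloc();
if (!hw_frame)
    return AVERROR(ENOMEM);
if (av_hwframe_get_buffer(ost->enc->hw_frames_ctx, hw_frame, 0) < 0)
    return -1;

/* The incoming gpuTex would then be copied into hw_frame's texture
 * (e.g. with ID3D11DeviceContext::CopySubresourceRegion) before encoding. */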




Here's some supplemental code used in setting up and configuring the hw context and encoder:


/* A few config flags */
#define ENABLE_NVENC TRUE
#define USE_D3D11 TRUE // Skip downloading textures to CPU memory and send them straight to NVENC



/* Init hardware frame context */
static int set_hwframe_ctx(AVCodecContext* ctx, AVBufferRef* hw_device_ctx) {
    AVBufferRef* hw_frames_ref;
    AVHWFramesContext* frames_ctx = NULL;
    int err = 0;

    if (!(hw_frames_ref = av_hwframe_ctx_alloc(hw_device_ctx))) {
        fprintf(stderr, "Failed to create HW frame context.\n");
        throw;
    }
    frames_ctx = (AVHWFramesContext*) (hw_frames_ref->data);
    frames_ctx->format    = AV_PIX_FMT_D3D11;
    frames_ctx->sw_format = AV_PIX_FMT_NV12;
    frames_ctx->width     = STREAM_WIDTH;
    frames_ctx->height    = STREAM_HEIGHT;
    //frames_ctx->initial_pool_size = 20;
    if ((err = av_hwframe_ctx_init(hw_frames_ref)) < 0) {
        fprintf(stderr, "Failed to initialize hw frame context. Error code: %s\n", av_err2str(err));
        av_buffer_unref(&hw_frames_ref);
        throw;
    }
    ctx->hw_frames_ctx = av_buffer_ref(hw_frames_ref);
    if (!ctx->hw_frames_ctx)
        err = AVERROR(ENOMEM);

    av_buffer_unref(&hw_frames_ref);
    return err;
}



/* Add an output stream. */
static void add_video_stream(
    OutputStream* ost,
    AVFormatContext* oc,
    const AVCodec** codec,
    enum AVCodecID codec_id,
    int width,
    int height
) {
    AVCodecContext* c;
    int ret;
    bool nvenc = false;

    /* find the encoder */
    if (ENABLE_NVENC) {
        printf("Getting nvenc encoder\n");
        *codec = avcodec_find_encoder_by_name("h264_nvenc");
        nvenc  = true;
    }

    if (!ENABLE_NVENC || *codec == NULL) {
        printf("Getting standard encoder\n");
        *codec = avcodec_find_encoder(codec_id);
        nvenc  = false;
    }
    if (!(*codec)) {
        fprintf(stderr, "Could not find encoder for '%s'\n",
                avcodec_get_name(codec_id));
        exit(1);
    }

    ost->st = avformat_new_stream(oc, NULL);
    if (!ost->st) {
        fprintf(stderr, "Could not allocate stream\n");
        exit(1);
    }
    ost->st->id = oc->nb_streams - 1;
    c = avcodec_alloc_context3(*codec);
    if (!c) {
        fprintf(stderr, "Could not alloc an encoding context\n");
        exit(1);
    }
    ost->enc = c;

    printf("Using video codec %s\n", avcodec_get_name(codec_id));

    c->codec_id = codec_id;
    c->bit_rate = 4000000;
    /* Resolution must be a multiple of two. */
    c->width  = STREAM_WIDTH;
    c->height = STREAM_HEIGHT;
    /* timebase: This is the fundamental unit of time (in seconds) in terms
     * of which frame timestamps are represented. For fixed-fps content,
     * timebase should be 1/framerate and timestamp increments should be
     * identical to 1. */
    ost->st->time_base = {1, STREAM_FRAME_RATE};
    c->time_base = ost->st->time_base;
    c->gop_size  = 12; /* emit one intra frame every twelve frames at most */

    if (nvenc && USE_D3D11) {
        const std::string hw_device_name = "d3d11va";
        AVHWDeviceType device_type = av_hwdevice_find_type_by_name(hw_device_name.c_str());

        // set up hw device context
        AVBufferRef *hw_device_ctx;
        // const char* device = "0"; // Default GPU (may be integrated in the case of switchable graphics!)
        const char* device = "1";
        ret = av_hwdevice_ctx_create(&hw_device_ctx, device_type, device, nullptr, 0);

        if (ret < 0) {
            fprintf(stderr, "Could not create hwdevice context; %s", av_err2str(ret));
        }

        set_hwframe_ctx(c, hw_device_ctx);
        c->pix_fmt = AV_PIX_FMT_D3D11;
    } else if (nvenc && !USE_D3D11)
        c->pix_fmt = AV_PIX_FMT_BGR0;
    else
        c->pix_fmt = STREAM_PIX_FMT;

    if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
        /* just for testing, we also add B-frames */
        c->max_b_frames = 2;
    }

    if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
        /* Needed to avoid using macroblocks in which some coeffs overflow.
         * This does not happen with normal video, it just happens here as
         * the motion of the chroma plane does not match the luma plane. */
        c->mb_decision = 2;
    }

    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
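One detail worth flagging, as an assumption rather than anything stated in the question: for encoding (as opposed to decoding) through a d3d11va frames context, the bind flags of the pool textures matter to NVENC's resource registration, and FFmpeg exposes them via AVD3D11VAFramesContext in libavutil/hwcontext_d3d11va.h. A hedged sketch of setting them inside set_hwframe_ctx, after filling in the frame parameters and before av_hwframe_ctx_init():

/* frames_ctx->hwctx is the d3d11va-specific part of the frames context */
AVD3D11VAFramesContext *d3d11_fctx = (AVD3D11VAFramesContext *) frames_ctx->hwctx;
d3d11_fctx->BindFlags = D3D11_BIND_RENDER_TARGET; /* rather than the decoder-oriented default */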



-
Gstreamer, x264enc "Redistribute latency..." error
20 August 2021, by Jason
I'm trying to set up a video pipeline with very limited bandwidth. I was able to do it with two Raspberry Pis using the lines below. The first is for the camera Pi and the second is to watch the stream:


gst-launch-1.0 rpicamsrc preview=false ! 'video/x-h264, width=800, height=600, framerate=30/1' ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! udpsink host=YOUR_PC_IP port=5000
gst-launch-1.0 udpsrc port=5000 ! gdpdepay ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink sync=false



It works, but I go over my bandwidth limit if there is movement. I'm not sure if there is a way to limit bandwidth by setting a parameter here:


'video/x-h264, width=800, height=600, framerate=30/1'



From what I can find online, I have to use something like x264enc. I've followed tutorials, but I can't get x264enc to work: it always outputs "Redistribute latency..." on both machines when run, and it stays there.


I've tried using x264enc as follows:


gst-launch-1.0 rpicamsrc preview=false ! 'video/x-raw, width=800, height=600, framerate=30/1' ! x264enc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! udpsink host=YOUR_PC_IP port=5000
gst-launch-1.0 rpicamsrc preview=false ! 'video/x-raw, width=800, height=600, framerate=30/1' ! x264enc tune=zerolatency ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! udpsink host=YOUR_PC_IP port=5000
gst-launch-1.0 rpicamsrc preview=false ! x264enc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! udpsink host=YOUR_PC_IP port=5000
gst-launch-1.0 rpicamsrc preview=false ! x264enc tune=zerolatency ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! udpsink host=YOUR_PC_IP port=5000



Based on tutorials, I would think some of those should work. Other threads say that tune=zerolatency fixes this problem, at least the ones with the same "Redistribute latency..." output. I don't know what I'm doing wrong.
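On the bandwidth question specifically, a hedged note: the caps string only describes the stream's format, so an explicit cap would come from the encoder element instead. x264enc has a bitrate property expressed in kbit/s, so a constrained variant of the pipelines above might look like this (unverified here; 500 kbit/s is an arbitrary example value):

gst-launch-1.0 rpicamsrc preview=false ! 'video/x-raw, width=800, height=600, framerate=30/1' ! x264enc bitrate=500 tune=zerolatency speed-preset=ultrafast ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! udpsink host=YOUR_PC_IP port=5000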


Any help would be appreciated. Thanks!