
Media (39)
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
ED-ME-5 1-DVD
11 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
1,000,000
27 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Demon Seed
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
The Four of Us are Dying
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Corona Radiata
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (72)
-
What is a form mask
13 June 2013, by
A form mask is a customization of the form used to publish media, sections, news items, editorials and links to other sites.
Each object's publication form can therefore be customized.
To customize the form fields, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires".
Then select the form to modify by clicking on its object type. (...) -
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first MediaSPIP stable release.
Its official release date is June 21, 2013, and it is announced here.
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
Adding notes and captions to images
7 February 2011, by
To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area in order to change the rights to create, modify and delete notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (6890)
-
Using libav to encode RGBA frames into MP4 but the output is a mess
5 October 2019, by Cu2S
I'm trying to decode a video into RGB frames, postprocess the frames, and finally encode them back into a video. But the output video is a complete mess:
I wrote a minimal example to illustrate my idea. First, I read some information from the source video:
AVFormatContext* inputFormatCtx = nullptr;
int ret = avformat_open_input(&inputFormatCtx, inputParamsVideo, nullptr, nullptr);
assert(ret >= 0);
ret = avformat_find_stream_info(inputFormatCtx, NULL);
av_dump_format(inputFormatCtx, 0, inputParamsVideo, 0);
assert(ret >= 0);
AVStream* inputVideoStream = nullptr;
for (int i = 0; i < inputFormatCtx->nb_streams; i++)
{
const auto inputStream = inputFormatCtx->streams[i];
if (inputStream->codec->codec_type == AVMEDIA_TYPE_VIDEO)
{
inputVideoStream = inputStream;
break;
}
}
assert(inputVideoStream != nullptr);
AVCodecParameters* inputParams = inputVideoStream->codecpar;
AVRational framerate = inputVideoStream->codec->framerate;
auto gop_size = inputVideoStream->codec->gop_size;
auto maxBFrames = inputVideoStream->codec->max_b_frames;
Then I assign the information to the output stream:
AVFormatContext *outputAVFormat = nullptr;
avformat_alloc_output_context2(&outputAVFormat, nullptr, nullptr, kOutputPath);
assert(outputAVFormat);
AVCodec* codec = avcodec_find_encoder(outputAVFormat->oformat->video_codec);
assert(codec);
AVCodecContext* encodingCtx = avcodec_alloc_context3(codec);
avcodec_parameters_to_context(encodingCtx, inputParams);
encodingCtx->time_base = av_inv_q(framerate);
encodingCtx->max_b_frames = maxBFrames;
encodingCtx->gop_size = gop_size;
if (outputAVFormat->oformat->flags & AVFMT_GLOBALHEADER)
encodingCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
AVStream* outStream = avformat_new_stream(outputAVFormat, nullptr);
assert(outStream != nullptr);
ret = avcodec_parameters_from_context(outStream->codecpar, encodingCtx);
assert(ret >= 0);
outStream->time_base = encodingCtx->time_base;
Then I convert the RGBA frames (read from files) into YUV420P via sws_scale, and encode:
ret = avcodec_open2(encodingCtx, codec, nullptr);
assert(ret >= 0);
av_dump_format(outputAVFormat, 0, kOutputPath, 1);
ret = avio_open(&outputAVFormat->pb, kOutputPath, AVIO_FLAG_WRITE);
assert(ret >= 0);
ret = avformat_write_header(outputAVFormat, nullptr);
assert(ret >= 0);
AVFrame* frame = av_frame_alloc();
frame->width = inputParams->width;
frame->height = inputParams->height;
frame->format = inputParams->format;
frame->pts = 0;
assert(ret >= 0);
ret = av_frame_get_buffer(frame, 32);
int frameCount = 0;
assert(ret >= 0);
ret = av_frame_make_writable(frame);
assert(ret >= 0);
SwsContext* swsContext = sws_getContext(inputParams->width, inputParams->height,
AV_PIX_FMT_RGBA, frame->width,
frame->height, static_cast<AVPixelFormat>(inputParams->format),
SWS_BILINEAR, NULL, NULL, NULL);
for (auto inputPicPath : std::filesystem::directory_iterator(kInputDir))
{
int width, height, comp;
unsigned char* data = stbi_load(inputPicPath.path().string().c_str(), &width, &height, &comp, 4);
int srcStrides[1] = { 4 * width };
int ret = sws_scale(swsContext, &data, srcStrides, 0, height, frame->data,
frame->linesize);
assert(ret >= 0);
frame->pts = frameCount;
//frame->pict_type = AV_PICTURE_TYPE_I;
frameCount += 1;
encode(encodingCtx, frame, 0, outputAVFormat);
stbi_image_free(data);
}
while (encode(encodingCtx, nullptr, 0, outputAVFormat))
{
;
}
static bool encode(AVCodecContext* enc_ctx, AVFrame* frame, std::uint32_t streamIndex,
AVFormatContext * formatCtx)
{
int ret;
int got_output = 0;
AVPacket packet = {};
av_init_packet(&packet);
ret = avcodec_encode_video2(enc_ctx, &packet, frame, &got_output);
assert(ret >= 0);
if (got_output) {
packet.stream_index = streamIndex;
av_packet_rescale_ts(&packet, enc_ctx->time_base, formatCtx->streams[streamIndex]->time_base);
ret = av_interleaved_write_frame(formatCtx, &packet);
assert(ret >= 0);
return true;
}
else {
return false;
}
}
Finally I cleaned up stuff:
av_write_trailer(outputAVFormat);
sws_freeContext(swsContext);
avcodec_free_context(&encodingCtx);
avio_closep(&outputAVFormat->pb);
avformat_free_context(outputAVFormat);
av_frame_free(&frame);
I dumped my input format and my output format:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'H:\Me.MP4':
Metadata:
major_brand : mp42
minor_version : 1
compatible_brands: mp41mp42isom
creation_time : 2019-04-03T05:44:22.000000Z
Duration: 00:00:06.90, start: 0.000000, bitrate: 1268 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 540x960, 1238 kb/s, 29.86 fps, 30 tbr, 600 tbn, 1200 tbc (default)
Metadata:
creation_time : 2019-04-03T05:44:22.000000Z
handler_name : Core Media Video
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, stereo, fltp, 24 kb/s (default)
Metadata:
creation_time : 2019-04-03T05:44:22.000000Z
handler_name : Core Media Audio
[libx264 @ 000002126F90C1C0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 000002126F90C1C0] profile High, level 3.1, 4:2:0, 8-bit
[libx264 @ 000002126F90C1C0] 264 - core 157 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=2 keyint=12 keyint_min=1 scenecut=40 intra_refresh=0 rc_lookahead=12 rc=abr mbtree=1 bitrate=1238 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to './output.mp4':
Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 540x960, q=2-31, 1238 kb/s, 29.86 tbn
Update:
After I deleted
encodingCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
the output video is correct. Also, outputting AVI works.
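For what it's worth, a likely explanation is that with AV_CODEC_FLAG_GLOBAL_HEADER set, libx264 puts the SPS/PPS into encodingCtx->extradata instead of into the bitstream, and in the code above avcodec_parameters_from_context() is called before avcodec_open2(), so that extradata never reaches the muxer. A minimal sketch of how the flag could be kept, reusing the variable names from the question (an assumption about the fix, not tested against the asker's files):

encodingCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
// Open the encoder first: avcodec_open2() is what fills encodingCtx->extradata
ret = avcodec_open2(encodingCtx, codec, nullptr);
assert(ret >= 0);
// Only now copy the parameters (including extradata) to the output stream
AVStream* outStream = avformat_new_stream(outputAVFormat, nullptr);
ret = avcodec_parameters_from_context(outStream->codecpar, encodingCtx);
assert(ret >= 0);
outStream->time_base = encodingCtx->time_base;
// ...then avio_open() and avformat_write_header() as in the original code.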
-
Manual encoding into MPEG-TS
4 July 2014, by Lane
So...
I am trying to take an H.264 Annex B byte stream video and encode it into MPEG-TS in pure Java. My goal is to create a minimal, valid, single-program MPEG-TS stream that does not include any timing information (PCR, PTS, DTS).
I am currently at the point where my generated file can be passed to ffmpeg (ffmpeg -i myVideo.ts) and ffmpeg reports...
[NULL @ 0x7f8103022600] start time is not set in estimate_timings_from_pts
Input #0, mpegts, from 'video.ts':
Duration: N/A, bitrate: N/A
Program 1
Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
...it seems like this start-time warning is not a big deal... and ffmpeg is unable to determine how long the video is. If I create another MPEG-TS file from my video file (ffmpeg -i myVideo.ts -vcodec copy validVideo.ts) and run ffmpeg -i validVideo.ts, I get...
Input #0, mpegts, from 'video2.ts':
Duration: 00:00:11.61, start: 1.400000, bitrate: 3325 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
...so you can see the timing information and bitrate are there, and so is the metadata.
My H264 video consists of only I and P Frames (with the SPS and PPS preceding the I Frame, of course), and the way I am creating my MPEG-TS stream is...
1. Write a single PAT at the beginning of the file
2. Write a single PMT at the beginning of the file
3. Create TS and PES packets from the SPS, PPS and I Frame (AUD NALs too, if this is required?)
4. Create TS and PES packets from the P Frame (again, AUD NALs too, if required)
5. For the last payload of either an I Frame or P Frame, add filler bytes to an adaptation field to make sure it fits into a full TS packet
6. Repeat 3-5 for the entire file (a sketch of the TS header layout follows this list)
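As a point of reference for steps 3-5 (the question targets pure Java, but the byte layout is language-agnostic), here is a minimal C++-style sketch of how the 4-byte TS packet header is usually packed; tsHeader, pid, pusi and counter are hypothetical names used only for illustration:

#include <array>
#include <cstdint>

// 188-byte TS packets all start with this 4-byte header (ISO/IEC 13818-1).
std::array<uint8_t, 4> tsHeader(uint16_t pid, bool pusi, uint8_t counter,
                                bool hasAdaptation, bool hasPayload)
{
    std::array<uint8_t, 4> h{};
    h[0] = 0x47;                                        // sync byte
    h[1] = (pusi ? 0x40 : 0x00) | ((pid >> 8) & 0x1F);  // PUSI + PID (high 5 bits)
    h[2] = pid & 0xFF;                                   // PID (low 8 bits)
    h[3] = (hasAdaptation ? 0x20 : 0x00)                 // adaptation_field_control
         | (hasPayload    ? 0x10 : 0x00)
         | (counter & 0x0F);                             // continuity counter
    return h;
}

// e.g. the question's first PAT packet starts 0x47 0x40 0x00 0x10:
// PUSI set, PID 0x0000, payload only, continuity counter 0.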
...my PAT looks like this...
4740 0010 0000 b00d 0001 c100 0000 01f0
002a b104 b2ff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff
...and my PMT looks like this...
4750 0010
0002 b012 0001 c100 00ff fff0 001b e100
f000 c15b 41e0 ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff
...notice that after the c100 00, the "ff ff", f0... says that we are not using a PCR... Also notice that I have updated my CRC to reflect this change to the PMT. My first I Frame packet looks like...
4741 0010 0000 01e0
0000 8000 0000 0000 0109 f000 0000 0127
4d40 288d 8d60 2802 dd80 b501 0101 4000
00fa 4000 3a98 3a18 00b7 2000 3380 2ef2
e343 0016 e400 0670 05de 5c16 345d c000
0000 0128 ee3c 8000 0000 0165 8880 0020
0000 4fe5 63b5 4e90 b11c 9f8f f891 10f3
13b1 666b 9fc6 03e9 e321 36bf 1788 347b
eb23 fc89 5772 6e2e 1714 96df ed16 9b30
252d ceb7 07e9 a0c7 c6e7 9515 be87 2df1
81f3 b9d2 ba5f 243e 2d5c cba2 8ca5 b798
6bec 8c43 0b5d bbda bc5b 6e7c e15c 84e8
2f13 be84
...you'll notice that after the 01e0 0000, the 8000 00 is the PES header extension, where I specify no PTS / DTS and the remaining header length is zero. My first P Frame packet looks like...
4741 001d
0000 01e0 0000 8000 0000 0000 0109 f000
0000 0141 9a00 0200 0593 ff45 a7ae 1acd
f2d7 f9ec 557f cdb6 ba38 60d6 a626 5edb
4bb9 9783 89e2 d7e1 102e 4625 2fbf ce16
f952 d8c9 f027 e55a 6b2a 81c3 48d4 6a45
050a f355 fbec db01 6562 6405 04aa e011
50ec 0b45 45e5 0df7 2fed a3f8 ac13 2e69
6739 6d81 f13d 2455 e6ca 1c6b dc96 65d5
3bad f250 7dab 42e4 7ba9 f564 ee61 29fb
1b2c 974c 6924 1a1f 99ef 063c b99a c507
8c22 b0f8 b14c 3e4d 01d0 6120 4e19 8725
2fda 6550 f907 3f87
...and whenever an I Frame or P Frame is ending, I have a TS packet with an adaptation field like...
4701 003c b000 ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff ffff ffff
ffff ffff ffff ffff ffff ffff
...where the first 0xb0 bytes are the adaptation field stuffing bytes and the remaining ones are the final bytes of the I or P Frame. So, as you can tell, I can pass my file to ffmpeg and create a valid movie in any format. However, I need the file I create to be in the proper format itself, and I cannot quite figure out what the last piece I am missing is. Any ideas?
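For reference, a hedged C++-style sketch of how the stuffing in that final packet is usually sized (a TS packet is always 188 bytes: 4-byte header, then the adaptation field, then the payload); writeFinalPacket, header4 and tail are hypothetical names for illustration only:

#include <cstdint>
#include <cstring>

// Pads the last `tailLen` bytes of a frame out to a full 188-byte TS packet
// by absorbing the slack in the adaptation field (assumes 1 <= tailLen <= 182).
void writeFinalPacket(uint8_t out[188], const uint8_t header4[4],
                      const uint8_t* tail, int tailLen)
{
    std::memcpy(out, header4, 4);                  // TS header, AFC = '11' (adaptation + payload)
    int afLen = 188 - 4 - 1 - tailLen;             // adaptation_field_length, e.g. 0xb0
    out[4] = static_cast<uint8_t>(afLen);
    out[5] = 0x00;                                 // adaptation field flags: none set
    std::memset(out + 6, 0xFF, afLen - 1);         // stuffing bytes
    std::memcpy(out + 5 + afLen, tail, tailLen);   // final bytes of the I/P frame
}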
-
ffmpeg Command in Docker with Rust Tokio Closes Warp Server Connection (curl 52 Error)
3 June 2024, by user762345
I'm encountering an issue where executing an ffmpeg concatenation command through Rust's Tokio process in a Docker container causes subsequent HTTP requests to fail. The error occurs exclusively after running the ffmpeg command and making immediate requests, resulting in a "curl 52 empty reply from server" error with the connection being closed. Notably, this issue does not occur when running the same setup outside of Docker. Additionally, if no HTTP requests are made after the ffmpeg command, the curl 52 error does not occur.


Here is the verbose curl output of my minimum reproducible example (see below).


curl -v "http://localhost:3030"
* Trying 127.0.0.1:3030...
* Connected to localhost (127.0.0.1) port 3030 (#0)
> GET / HTTP/1.1
> Host: localhost:3030
> User-Agent: curl/8.1.2
> Accept: */*
> 
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server



Here are Docker logs from my minimum reproducible example (see below). The wav files are concatenated successfully, then the container appears to rebuild.


[2024-06-03T05:26:58Z INFO minimal_docker_webserver_post_error] Starting server on 0.0.0.0:3030
[2024-06-03T05:26:58Z INFO warp::server] Server::run; addr=0.0.0.0:3030
[2024-06-03T05:26:58Z INFO warp::server] listening on http://0.0.0.0:3030
[2024-06-03T05:27:07Z INFO minimal_docker_webserver_post_error] WAV files concatenated successfully
[Running 'cargo run']
 Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.06s
 Running `target/debug/minimal_docker_webserver_post_error`
[2024-06-03T05:27:08Z INFO minimal_docker_webserver_post_error] Starting server on 0.0.0.0:3030
[2024-06-03T05:27:08Z INFO warp::server] Server::run; addr=0.0.0.0:3030
[2024-06-03T05:27:08Z INFO warp::server] listening on http://0.0.0.0:3030



What I have tried:
I tried using different web frameworks (Warp, Actix-web) and request crates (reqwest, ureq). I also tried running the setup outside of Docker, which worked as expected without any issues. Additionally, I tried running the setup in Docker without making any HTTP requests after the ffmpeg command, and the connection closed successfully without errors. I also tried posting to httpbin with a minimal request, but the issue persisted.


Minimum reproducible example:


main.rs


use warp::Filter;
use reqwest::Client;
use std::convert::Infallible;
use log::{info, error};
use env_logger;
use tokio::process::Command;

#[tokio::main]
async fn main() {
 std::env::set_var("RUST_LOG", "debug");
 env_logger::init();

 let route = warp::path::end()
 .and_then(handle_request);

 info!("Starting server on 0.0.0.0:3030");
 warp::serve(route)
 .run(([0, 0, 0, 0], 3030))
 .await;
}

async fn handle_request() -> Result<impl warp::Reply, Infallible> {
 let client = Client::new();

 let output = Command::new("ffmpeg")
 .args(&[
 "y",
 "-i", "concat:/usr/src/minimal_docker_webserver_post_error/file1.wav|/usr/src/minimal_docker_webserver_post_error/file2.wav",
 "-c", "copy",
 "/usr/src/minimal_docker_webserver_post_error/combined.wav"
 ])
 .output()
 .await;

 match output {
 Ok(output) => {
 if output.status.success() {
 info!("WAV files concatenated successfully");
 } else {
 error!("Failed to concatenate WAV files: {:?}", output);
 return Ok(warp::reply::with_status("Failed to concatenate WAV files", warp::http::StatusCode::INTERNAL_SERVER_ERROR));
 }
 },
 Err(e) => {
 error!("Failed to execute ffmpeg: {:?}", e);
 return Ok(warp::reply::with_status("Failed to execute ffmpeg", warp::http::StatusCode::INTERNAL_SERVER_ERROR));
 }
 }

 // ISSUE: Connection closes with curl: (52) Empty reply from server
 match client.get("https://httpbin.org/get").send().await {
 Ok(response) => info!("GET request successful: {:?}", response),
 Err(e) => error!("GET request failed: {:?}", e),
 }

 match client.post("https://httpbin.org/post")
 .body("field1=value1&field2=value2")
 .send().await {
 Ok(response) => info!("POST request successful: {:?}", response),
 Err(e) => error!("POST request failed: {:?}", e),
 }

 Ok(warp::reply::with_status("Request handled", warp::http::StatusCode::OK))
}


ffmpeg command to generate the two WAV files for concatenation


ffmpeg -f lavfi -i "sine=frequency=1000:duration=5" file1.wav && ffmpeg -f lavfi -i "sine=frequency=500:duration=5" file2.wav



Dockerfile


# Use the official Rust image as the base image
FROM rust:latest

# Install cargo-watch
RUN cargo install cargo-watch

# Install ffmpeg
RUN apt-get update && apt-get install -y ffmpeg

# Set the working directory inside the container
WORKDIR /usr/src/minimal_docker_webserver_post_error

# Copy the Cargo.toml and Cargo.lock files
COPY Cargo.toml Cargo.lock ./

# Copy the source code
COPY src ./src

# Copy wav files
COPY file1.wav /usr/src/minimal_docker_webserver_post_error/file1.wav
COPY file2.wav /usr/src/minimal_docker_webserver_post_error/file2.wav

# Install dependencies
RUN cargo build --release

# Expose the port that the application will run on
EXPOSE 3030

# Set the entry point to use cargo-watch
CMD ["cargo", "watch", "-x", "run"]



Cargo.toml


[package]
name = "minimal_docker_webserver_post_error"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
warp = "0.3"
reqwest = { version = "0.12.4", features = ["json"] }
tokio = { version = "1", features = ["full"] }
log = "0.4"
env_logger = "0.11.3"



Making the request to the warp server


curl -v "http://localhost:3030"