
Media (91)
-
999,999
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
The Slip - Artworks
26 September 2011, by
Updated: September 2011
Language: English
Type: Text
-
Demon seed (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
The four of us are dying (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Corona radiata (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Lights in the sky (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
Other articles (33)
-
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administer" section of the site.
From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language; once that happens, it becomes greyed out in the configuration and (...)
Apache-specific configuration
4 February 2011, by
Specific modules
For the Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but that improve performance: mod_deflate and mod_headers, so that Apache compresses pages automatically (see this tutorial); mod_expires, to handle the expiration of hits correctly (see this tutorial).
It is also advisable to add Apache support for the WebM mime-type, as described in this tutorial.
Creation of a (...)
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (5223)
-
Android MediaRecorder setCaptureRate() and video playback speed
7 November 2013, by spitzanator
I've got a MediaRecorder recording video, and I'm very confused by the effect of setCaptureRate().
Specifically, I prepare my MediaRecorder as follows:
mMediaRecorder = new MediaRecorder();
mCamera.stopPreview();
mCamera.unlock();
mMediaRecorder.setCamera(mCamera);
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
mMediaRecorder.setProfile(CamcorderProfile.QUALITY_TIME_LAPSE_480P);
mMediaRecorder.setCaptureRate(30f);
mMediaRecorder.setOrientationHint(270);
mMediaRecorder.setOutputFile(...);
mMediaRecorder.setPreviewDisplay(...);
mMediaRecorder.prepare();
I record for five seconds (with a CountDownTimer, but that's irrelevant), and this is the file that gets generated:
$ ffmpeg -i ~/CaptureRate30fps.mp4
...
Seems stream 0 codec frame rate differs from container frame rate: 180000.00 (180000/1) -> 30.00 (30/1)
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/mspitz/CaptureRate30fps.mp4':
Metadata:
major_brand : isom
minor_version : 0
compatible_brands: isom3gp4
creation_time : 2013-06-04 00:52:00
Duration: 00:00:02.59, start: 0.000000, bitrate: 5238 kb/s
Stream #0.0(eng): Video: h264 (Baseline), yuv420p, 720x480, 5235 kb/s, PAR 65536:65536 DAR 3:2, 30 fps, 30 tbr, 90k tbn, 180k tbc
Metadata:
creation_time : 2013-06-04 00:52:00
Note that the Duration is just about 3 seconds. The video also plays much faster, as if it were 5 seconds of video crammed into 3.
Now, if I record by preparing my MediaRecorder exactly as above but without the setCaptureRate(30f) line, I get a file like this:
$ ffmpeg -i ~/NoSetCaptureRate.mp4
...
Seems stream 0 codec frame rate differs from container frame rate: 180000.00 (180000/1) -> 90000.00 (180000/2)
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/mspitz/NoSetCaptureRate.mp4':
Metadata:
major_brand : isom
minor_version : 0
compatible_brands: isom3gp4
creation_time : 2013-06-04 00:50:41
Duration: 00:00:04.87, start: 0.000000, bitrate: 2803 kb/s
Stream #0.0(eng): Video: h264 (Baseline), yuv420p, 720x480, 2801 kb/s, PAR 65536:65536 DAR 3:2, 16.01 fps, 90k tbr, 90k tbn, 180k tbc
Metadata:
creation_time : 2013-06-04 00:50:41
Note that the Duration is as expected, about 5 seconds. The video also plays at a normal speed.
I'm using setCaptureRate(30f) because 30 frames per second is the value of my CamcorderProfile's videoFrameRate. On my Galaxy Nexus S2 (4.2.1), omitting setCaptureRate() is fine, but when I tested on a Galaxy Nexus S3 (4.1.1), omitting setCaptureRate() results in the ever-helpful "start failed -22" error when I called
mMediaRecorder.start().
So, what am I missing? I thought that the capture rate and the video frame rate were independent, but it's clear that they're not. Is there a way to determine programmatically what I need to set the capture rate to so that my video plays back at 1x speed?
-
Split Video Files and Make them individually playable
21 July 2013, by rash
I'm new to Python. I split a WebM video file into chunks, but I couldn't make the chunks individually playable with my Python program; they only play after I join the chunks back into a single file. I know this is due to the missing header data in each chunk. Please help me with code to attach the header to each part so that the parts are individually playable. Thanks a lot in advance.
Here is the code:
Client side:
import socket, os
import time
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect(("localhost", 5005))
size = 1024
while True:
    fname = "/home/xincoz/test/conn2.webm"
    fn = client_socket.recv(1024)
    print fn
    fp = open(fname, 'wb')
    while True:
        strng = client_socket.recv(int(fn))
        print strng
        if not strng:
            break
        fp.write(strng)
    fp.close()
    print "Data Received successfully"
    exit()
Server side:
import os,kaa.metadata
import sys,time
import socket
import Image
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(("localhost", 5005))
server_socket.listen(5)
client_socket, address = server_socket.accept()
print "Conencted to - ",address,"\n"
file = '/home/xincoz/Downloads/connect.webm'
a = kaa.metadata.parse(file)
print '\n Maybe, pending work'
file_name = open(file,'rb')
size=str(os.path.getsize(file))
print size
client_socket.send(str(os.path.getsize(file)))
print file_name
strng = file_name.read(os.path.getsize(file))
client_socket.send(strng[0:2000000])
file_name.close()
print str(a)+"Meta Data"
print "Data sent successfully" -
libav ffmpeg codec copy rtp_mpegts streaming with very bad quality
27 December 2017, by Dinkan
I am trying to do a codec copy of a stream (testing with a file for now, and later moving to a live stream) in the rtp_mpegts format over the network, and to play it with the VLC player. I started my proof-of-concept code from a slightly modified remuxing.c from the examples.
What I am essentially trying to do is replicate
./ffmpeg -re -i TEST_VIDEO.ts -acodec copy -vcodec copy -f rtp_mpegts rtp://239.245.0.2:5002
Streaming is happening, but the quality is terrible. It looks like many frames are skipped, and streaming happens really slowly (buffer underflow reported by the VLC player). The file plays perfectly fine directly in VLC.
Please help. Stream details:
Input #0, mpegts, from ' TEST_VIDEO.ts':
Duration: 00:10:00.40, start: 41313.400811, bitrate: 2840 kb/s
Program 1
Stream #0:0[0x11]: Video: h264 (High) ([27][0][0][0] / 0x001B),
yuv420p(tv, bt709, top first), 1440x1080 [SAR 4:3 DAR 16:9], 29.97
fps, 59.94 tbr, 90k tbn, 59.94 tbc
Stream #0:1[0x14]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz,
stereo, fltp, 448 kb/s
Output #0, rtp_mpegts, to 'rtp://239.255.0.2:5004':
Metadata:
encoder : Lavf57.83.100
Stream #0:0: Video: h264 (High) ([27][0][0][0] / 0x001B),
yuv420p(tv, bt709, top first), 1440x1080 [SAR 4:3 DAR 16:9], q=2-31,
29.97 fps, 59.94 tbr, 90k tbn, 29.97 tbc
Stream #0:1: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo,
fltp, 448 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame= 418 fps=5.2 q=-1.0 size= 3346kB time=00:00:08.50
bitrate=3223.5kbits/s speed=0.106x
My complete source code (this is almost the same as remuxing.c):
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket
*pkt, const char *tag)
{
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s
duration_time:%s stream_index:%d\n",
tag,
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
pkt->stream_index);
}
int main(int argc, char **argv)
{
AVOutputFormat *ofmt = NULL;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVPacket pkt;
const char *in_filename, *out_filename;
int ret, i;
int stream_index = 0;
int *stream_mapping = NULL;
int stream_mapping_size = 0;
AVRational mux_timebase;
int64_t start_time = 0; //(of->start_time == AV_NOPTS_VALUE) ? 0 : of->start_time;
int64_t ost_tb_start_time = 0; //av_rescale_q(start_time, AV_TIME_BASE_Q, ost->mux_timebase);
if (argc < 3) {
printf("usage: %s input output\n"
"API example program to remux a media file with
libavformat and libavcodec.\n"
"The output format is guessed according to the file extension.\n"
"\n", argv[0]);
return 1;
}
in_filename = argv[1];
out_filename = argv[2];
av_register_all();
avcodec_register_all();
avformat_network_init();
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
fprintf(stderr, "Could not open input file '%s'", in_filename);
goto end;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
fprintf(stderr, "Failed to retrieve input stream information");
goto end;
}
av_dump_format(ifmt_ctx, 0, in_filename, 0);
avformat_alloc_output_context2(&ofmt_ctx, NULL, "rtp_mpegts", out_filename);
if (!ofmt_ctx) {
fprintf(stderr, "Could not create output context\n");
ret = AVERROR_UNKNOWN;
goto end;
}
stream_mapping_size = ifmt_ctx->nb_streams;
stream_mapping = av_mallocz_array(stream_mapping_size,
sizeof(*stream_mapping));
if (!stream_mapping) {
ret = AVERROR(ENOMEM);
goto end;
}
ofmt = ofmt_ctx->oformat;
for (i = 0; i < ifmt_ctx->nb_streams; i++)
{
AVStream *out_stream;
AVStream *in_stream = ifmt_ctx->streams[i];
AVCodecParameters *in_codecpar = in_stream->codecpar;
if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
stream_mapping[i] = -1;
continue;
}
stream_mapping[i] = stream_index++;
out_stream = avformat_new_stream(ofmt_ctx, NULL);
if (!out_stream) {
fprintf(stderr, "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
goto end;
}
//out_stream->codecpar->codec_tag = 0;
if (0 == out_stream->codecpar->codec_tag)
{
unsigned int codec_tag_tmp;
if (!out_stream->codecpar->codec_tag ||
av_codec_get_id (ofmt->codec_tag,
in_codecpar->codec_tag) == in_codecpar->codec_id ||
!av_codec_get_tag2(ofmt->codec_tag,
in_codecpar->codec_id, &codec_tag_tmp))
out_stream->codecpar->codec_tag = in_codecpar->codec_tag;
}
//ret = avcodec_parameters_to_context(ost->enc_ctx, ist->st->codecpar);
ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
if (ret < 0) {
fprintf(stderr, "Failed to copy codec parameters\n");
goto end;
}
//out_stream->codecpar->codec_tag = codec_tag;
// copy timebase while removing common factors
printf("bit_rate %lld sample_rate %d frame_size %d\n",
in_codecpar->bit_rate, in_codecpar->sample_rate,
in_codecpar->frame_size);
out_stream->avg_frame_rate = in_stream->avg_frame_rate;
ret = avformat_transfer_internal_stream_timing_info(ofmt,
out_stream, in_stream,
AVFMT_TBCF_AUTO);
if (ret < 0) {
fprintf(stderr,
"avformat_transfer_internal_stream_timing_info failed\n");
goto end;
}
if (out_stream->time_base.num <= 0 || out_stream->time_base.den <= 0)
out_stream->time_base =
av_add_q(av_stream_get_codec_timebase(out_stream), (AVRational){0,
1});
// copy estimated duration as a hint to the muxer
if (out_stream->duration <= 0 && in_stream->duration > 0)
out_stream->duration = av_rescale_q(in_stream->duration,
in_stream->time_base, out_stream->time_base);
// copy disposition
out_stream->disposition = in_stream->disposition;
out_stream->sample_aspect_ratio = in_stream->sample_aspect_ratio;
out_stream->avg_frame_rate = in_stream->avg_frame_rate;
out_stream->r_frame_rate = in_stream->r_frame_rate;
if ( in_codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
{
mux_timebase = in_stream->time_base;
}
if (in_stream->nb_side_data) {
for (i = 0; i < in_stream->nb_side_data; i++) {
const AVPacketSideData *sd_src = &in_stream->side_data[i];
uint8_t *dst_data;
dst_data = av_stream_new_side_data(out_stream,
sd_src->type, sd_src->size);
if (!dst_data)
return AVERROR(ENOMEM);
memcpy(dst_data, sd_src->data, sd_src->size);
}
}
}
av_dump_format(ofmt_ctx, 0, out_filename, 1);
start_time = ofmt_ctx->duration;
ost_tb_start_time = av_rescale_q(ofmt_ctx->duration,
AV_TIME_BASE_Q, mux_timebase);
if (!(ofmt->flags & AVFMT_NOFILE))
{
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Could not open output file '%s'", out_filename);
goto end;
}
}
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file\n");
goto end;
}
while (1)
{
AVStream *in_stream, *out_stream;
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt.stream_index];
if (pkt.stream_index >= stream_mapping_size ||
stream_mapping[pkt.stream_index] < 0) {
av_packet_unref(&pkt);
continue;
}
pkt.stream_index = stream_mapping[pkt.stream_index];
out_stream = ofmt_ctx->streams[pkt.stream_index];
//log_packet(ifmt_ctx, &pkt, "in");
//ofmt_ctx->bit_rate = ifmt_ctx->bit_rate;
ofmt_ctx->duration = ifmt_ctx->duration;
/* copy packet */
//pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
//pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
if (pkt.pts != AV_NOPTS_VALUE)
pkt.pts = av_rescale_q(pkt.pts,
in_stream->time_base,mux_timebase) - ost_tb_start_time;
else
pkt.pts = AV_NOPTS_VALUE;
if (pkt.dts == AV_NOPTS_VALUE)
pkt.dts = av_rescale_q(pkt.dts, AV_TIME_BASE_Q, mux_timebase);
else
pkt.dts = av_rescale_q(pkt.dts, in_stream->time_base, mux_timebase);
pkt.dts -= ost_tb_start_time;
pkt.duration = av_rescale_q(pkt.duration,
in_stream->time_base, mux_timebase);
//pkt.duration = av_rescale_q(1, av_inv_q(out_stream->avg_frame_rate), mux_timebase);
pkt.pos = -1;
//log_packet(ofmt_ctx, &pkt, "out");
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
if (ret < 0) {
fprintf(stderr, "Error muxing packet\n");
break;
}
av_packet_unref(&pkt);
}
av_write_trailer(ofmt_ctx);
end:
avformat_close_input(&ifmt_ctx);
/* close output */
if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
av_freep(&stream_mapping);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}
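One thing worth noting about the command being replicated: the -re flag tells ffmpeg to read the input in real time, whereas a plain remux loop like the one above writes packets out as fast as av_read_frame() returns them, which a multicast RTP receiver cannot absorb smoothly. Below is a rough, hypothetical sketch (not from the original post; the helper name pace_to_dts and the wallclock_start_us variable are made up) of pacing each packet against the wall clock before it is muxed:

/* Hypothetical pacing helper: sleep until the packet's dts has been reached
 * on the wall clock, roughly mimicking ffmpeg's -re behaviour.
 * dts is assumed to be in the given time_base (e.g. in_stream->time_base). */
#include <libavutil/time.h>   /* av_gettime_relative(), av_usleep() */

static void pace_to_dts(int64_t dts, AVRational time_base, int64_t *wallclock_start_us)
{
    int64_t pkt_time_us, elapsed_us;
    if (dts == AV_NOPTS_VALUE)
        return;
    pkt_time_us = av_rescale_q(dts, time_base, AV_TIME_BASE_Q);
    if (*wallclock_start_us == 0)                /* latch the offset on the first packet */
        *wallclock_start_us = av_gettime_relative() - pkt_time_us;
    elapsed_us = av_gettime_relative() - *wallclock_start_us;
    if (pkt_time_us > elapsed_us)
        av_usleep((unsigned)(pkt_time_us - elapsed_us));
}

/* Usage sketch: call once per packet early in the copy loop, while pkt.dts is
 * still in the input stream's time base:
 *     pace_to_dts(pkt.dts, in_stream->time_base, &wallclock_start_us);
 * where wallclock_start_us is an int64_t initialised to 0 before the loop. */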