
Media (1)
-
The Slip - Artworks
26 September 2011
Updated: September 2011
Language: English
Type: Text
Other articles (98)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
After activation, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is operational right away. No separate configuration step is therefore required. -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (8002)
-
Imageio python converts GIF to MP4 incorrectly
1 June 2018, by Apurva Koti
I am writing a function to speed up or slow down a given GIF (or .gifv) file and save the resulting animation as an .mp4 file.
I'm using the Python imageio package (and its ffmpeg plugin) to do this: download the raw binary data from the GIF, write each frame to an mp4, and set the fps of the mp4 as needed.
My code is:

    import urllib2  # Python 2; urllib.request in Python 3
    import imageio

    def changespeed(vid, mult):
        vid = vid.replace('.gifv', '.gif')
        data = urllib2.urlopen(vid).read()
        reader = imageio.get_reader(data, 'gif')
        dur = float(reader.get_meta_data()['duration'])
        oldfps = 1000.0 / (10 if dur == 0 else dur)
        writer = imageio.get_writer('output.mp4', fps=(oldfps * mult), quality=8.0)
        for frame in reader:
            writer.append_data(frame)
        writer.close()

The problem is that at times the output colors are heavily corrupted, and there doesn't seem to be any predictability: this happens with some GIFs and doesn't happen with others. I have tried setting a high quality parameter in the writer, but this doesn't help.
Here is an example of a problematic GIF:
Input : https://i.imgur.com/xFezNYK.gif
Output : https://giant.gfycat.com/MelodicShimmeringBarb.mp4
I can see this issue locally in output.mp4, so the issue isn't with uploading to Gfycat. Is there anything I can do to avoid this behavior? Thanks.
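As an aside on the arithmetic in the question: the fps computation (a GIF per-frame duration in milliseconds converted to frames per second, scaled by the speed multiplier) can be isolated into a small helper. This is a sketch of my own; changespeed in the question inlines the same logic:

```python
def scaled_fps(duration_ms, mult, default_ms=10.0):
    """Convert a GIF per-frame duration (ms) to an output fps,
    scaled by a speed multiplier. A zero duration falls back to
    default_ms, mirroring the question's 10 ms fallback."""
    frame_ms = default_ms if duration_ms == 0 else duration_ms
    return (1000.0 / frame_ms) * mult
```

For example, a 40 ms frame duration at normal speed (mult=1.0) yields 25 fps.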
-
fftools/ffmpeg: propagate frame durations to packets when encoding
14 April 2023, by Anton Khirnov
fftools/ffmpeg: propagate frame durations to packets when encoding

Remove now-obsolete code setting packet durations pre-muxing for CFR
encoded video.

Changes output in the following FATE tests:
* numerous adpcm tests
* ffmpeg-filter_complex_audio
* lavf-asf
* lavf-mkv
* lavf-mkv_attachment
* matroska-encoding-delay
All of these change due to the fact that the output duration is now
the actual input data duration and does not include padding added by
the encoder.

* apng-osample: less wrong packet durations are now passed to the muxer.
  They are not entirely correct, because the first frame duration should
  be 3 rather than 2. This is caused by the vsync code and should be
  addressed later, but this change is a step in the right direction.
* tscc2-mov: last output frame has a duration of 11 rather than 1; this
  corresponds to the duration actually returned by the demuxer.
* film-cvid: video frame durations are now 2 rather than 1; this
  corresponds to durations actually returned by the demuxer and matches
  the timestamps.
* mpeg2-ticket6677: durations of some video frames are now 2 rather than
  1; this matches the timestamps.

- [DH] fftools/ffmpeg_enc.c
- [DH] fftools/ffmpeg_mux.c
- [DH] tests/ref/acodec/adpcm-ima_wav
- [DH] tests/ref/acodec/adpcm-ima_wav-trellis
- [DH] tests/ref/acodec/adpcm-ms
- [DH] tests/ref/acodec/adpcm-ms-trellis
- [DH] tests/ref/acodec/adpcm-swf
- [DH] tests/ref/acodec/adpcm-swf-trellis
- [DH] tests/ref/acodec/adpcm-swf-wav
- [DH] tests/ref/acodec/adpcm-yamaha
- [DH] tests/ref/acodec/adpcm-yamaha-trellis
- [DH] tests/ref/fate/apng-osample
- [DH] tests/ref/fate/autorotate
- [DH] tests/ref/fate/ffmpeg-filter_complex_audio
- [DH] tests/ref/fate/film-cvid
- [DH] tests/ref/fate/matroska-encoding-delay
- [DH] tests/ref/fate/mpeg2-ticket6677
- [DH] tests/ref/fate/tscc2-mov
- [DH] tests/ref/lavf/asf
- [DH] tests/ref/lavf/mkv
- [DH] tests/ref/lavf/mkv_attachment
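Schematically, the change can be pictured as follows (a simplified Python sketch of my own, not the actual C code in fftools/ffmpeg_enc.c): instead of deriving a fixed per-packet duration from the configured frame rate, the packet inherits the frame's own duration.

```python
from fractions import Fraction

def cfr_packet_duration(framerate, time_base):
    # Old behaviour (sketch): a constant duration computed from the
    # configured frame rate, expressed in time_base units.
    return int(Fraction(1) / (framerate * time_base))

def propagated_packet_duration(frame_duration):
    # New behaviour (sketch): the packet carries the actual duration
    # of the frame it was encoded from, with no encoder padding.
    return frame_duration
```

For 25 fps video in a 1/1000 time base, the old scheme stamps every packet with a duration of 40, regardless of the real frame durations.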
-
libav live transcode to SFML SoundStream, garbled and noise
20 June 2021, by William Lohan
I'm so close to having this working, but playing with the output sample format or codec context doesn't seem to solve it, and I don't know where to go from here.


#include <iostream>
#include <SFML/Audio.hpp>
#include "MyAudioStream.h"

extern "C"
{
#include <libavutil/opt.h>
#include <libavutil/avutil.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/audio_fifo.h>
#include <libswresample/swresample.h>
}

void setupInput(AVFormatContext *input_format_context, AVCodecContext **input_codec_context, const char *streamURL)
{
 // av_find_input_format("mp3");
 avformat_open_input(&input_format_context, streamURL, NULL, NULL);
 avformat_find_stream_info(input_format_context, NULL);

 AVDictionary *metadata = input_format_context->metadata;
 AVDictionaryEntry *name = av_dict_get(metadata, "icy-name", NULL, 0);
 if (name != NULL)
 {
 std::cout << name->value << std::endl;
 }
 AVDictionaryEntry *title = av_dict_get(metadata, "StreamTitle", NULL, 0);
 if (title != NULL)
 {
 std::cout << title->value << std::endl;
 }

 AVStream *stream = input_format_context->streams[0];
 AVCodecParameters *codec_params = stream->codecpar;
 AVCodec *codec = avcodec_find_decoder(codec_params->codec_id);
 *input_codec_context = avcodec_alloc_context3(codec);

 avcodec_parameters_to_context(*input_codec_context, codec_params);
 avcodec_open2(*input_codec_context, codec, NULL);
}

void setupOutput(AVCodecContext *input_codec_context, AVCodecContext **output_codec_context)
{
 AVCodec *output_codec = avcodec_find_encoder(AV_CODEC_ID_PCM_S16LE); // AV_CODEC_ID_PCM_S16LE ?? AV_CODEC_ID_PCM_S16BE
 *output_codec_context = avcodec_alloc_context3(output_codec);
 (*output_codec_context)->channels = 2;
 (*output_codec_context)->channel_layout = av_get_default_channel_layout(2);
 (*output_codec_context)->sample_rate = input_codec_context->sample_rate;
 (*output_codec_context)->sample_fmt = output_codec->sample_fmts[0]; // AV_SAMPLE_FMT_S16 ??
 avcodec_open2(*output_codec_context, output_codec, NULL);
}

void setupResampler(AVCodecContext *input_codec_context, AVCodecContext *output_codec_context, SwrContext **resample_context)
{
 *resample_context = swr_alloc_set_opts(
 *resample_context,
 output_codec_context->channel_layout,
 output_codec_context->sample_fmt,
 output_codec_context->sample_rate,
 input_codec_context->channel_layout,
 input_codec_context->sample_fmt,
 input_codec_context->sample_rate,
 0, NULL);
 swr_init(*resample_context);
}

MyAudioStream::MyAudioStream()
{
 input_format_context = avformat_alloc_context();
 resample_context = swr_alloc();
}

MyAudioStream::~MyAudioStream()
{
 // clean up
 avformat_close_input(&input_format_context);
 avformat_free_context(input_format_context);
}

void MyAudioStream::load(const char *streamURL)
{

 setupInput(input_format_context, &input_codec_context, streamURL);
 setupOutput(input_codec_context, &output_codec_context);
 setupResampler(input_codec_context, output_codec_context, &resample_context);

 initialize(output_codec_context->channels, output_codec_context->sample_rate);
}

bool MyAudioStream::onGetData(Chunk &data)
{

 // init
 AVFrame *input_frame = av_frame_alloc();
 AVPacket *input_packet = av_packet_alloc();
 input_packet->data = NULL;
 input_packet->size = 0;

 // read
 av_read_frame(input_format_context, input_packet);
 avcodec_send_packet(input_codec_context, input_packet);
 avcodec_receive_frame(input_codec_context, input_frame);

 // convert
 uint8_t *converted_input_samples = (uint8_t *)calloc(output_codec_context->channels, sizeof(*converted_input_samples));
 av_samples_alloc(&converted_input_samples, NULL, output_codec_context->channels, input_frame->nb_samples, output_codec_context->sample_fmt, 0);
 swr_convert(resample_context, &converted_input_samples, input_frame->nb_samples, (const uint8_t **)input_frame->extended_data, input_frame->nb_samples);

 data.sampleCount = input_frame->nb_samples;
 data.samples = (sf::Int16 *)converted_input_samples;

 // av_freep(&converted_input_samples[0]);
 // free(converted_input_samples);
 av_packet_free(&input_packet);
 av_frame_free(&input_frame);

 return true;
}

void MyAudioStream::onSeek(sf::Time timeOffset)
{
 // no op
}

sf::Int64 MyAudioStream::onLoop()
{
 // no loop
 return -1;
}



Called with


#include <iostream>

#include "./MyAudioStream.h"

extern "C"
{
#include <libavutil/opt.h>
#include <libavutil/avutil.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

const char *streamURL = "http://s5radio.ponyvillelive.com:8026/stream.mp3";

int main(int, char **)
{

 MyAudioStream myStream;

 myStream.load(streamURL);

 std::cout << "Hello, world!" << std::endl;

 myStream.play();

 while (myStream.getStatus() == MyAudioStream::Playing)
 {
 sf::sleep(sf::seconds(0.1f));
 }

 return 0;
}
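One detail worth double-checking in the code above (my observation, not something confirmed in the thread): sf::SoundStream::Chunk::sampleCount counts individual interleaved sf::Int16 samples, so for stereo output it should be nb_samples * channels rather than the bare nb_samples that onGetData assigns. The arithmetic, sketched in Python for brevity:

```python
def chunk_sample_count(nb_frames, channels):
    # SFML expects the total number of interleaved 16-bit samples:
    # frames * channels, not the per-channel frame count that
    # libav reports in AVFrame.nb_samples.
    return nb_frames * channels
```

For a stereo frame of 1152 samples per channel, SFML should be told 2304.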