
Media (21)
-
1,000,000
27 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Demon Seed
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Four of Us are Dying
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Corona Radiata
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Lights in the Sky
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Head Down
26 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (82)
-
Customizing by adding your logo, banner or background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013
Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form. For a document of type news item, the default fields are: publication date (customize the publication date) (...)
-
User profiles
12 April 2011
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
On other sites (11064)
-
How to create a webm video file from chunks produced by the MediaRecorder API using ffmpeg
17 October 2020, by Caio Nakai
I'm trying to create a webm video file from blobs generated by the MediaRecorder API in a Node.js server using FFmpeg. I'm able to create the .webm file, but it's not playable. I ran this command
$ ffmpeg.exe -v error -i lel.webm -f null - >error.log 2>&1
to generate an error log; the log file contains this:



[null @ 000002ce7501de40] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 1 >= 1
[h264 @ 000002ce74a727c0] Invalid NAL unit size (804 > 74).
[h264 @ 000002ce74a727c0] Error splitting the input into NAL units.
Error while decoding stream #0:0: Invalid data found when processing input
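
As a side note not taken from the question: ffprobe can report which container and codecs the produced file actually holds, which quickly shows whether the data written by the server is really WebM:

ffprobe -v error -show_format -show_streams lel.webm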




This is my web server code


const app = require("express")();
const http = require("http").createServer(app);
const io = require("socket.io")(http);
const fs = require("fs");
const child_process = require("child_process");

app.get("/", (req, res) => {
 res.sendFile(__dirname + "/index.html");
});

io.on("connection", (socket) => {
 console.log("a user connected");

 const ffmpeg = child_process.spawn("ffmpeg", [
 "-i",
 "-",
 "-vcodec",
 "copy",
 "-f",
 "flv",
 "rtmpUrl.webm",
 ]);

 ffmpeg.on("close", (code, signal) => {
 console.log(
 "FFmpeg child process closed, code " + code + ", signal " + signal
 );
 });

 ffmpeg.stdin.on("error", (e) => {
 console.log("FFmpeg STDIN Error", e);
 });

 ffmpeg.stderr.on("data", (data) => {
 console.log("FFmpeg STDERR:", data.toString());
 });

 socket.on("message", (msg) => {
 console.log("Writing blob! ");
 ffmpeg.stdin.write(msg);
 });

 socket.on("stop", () => {
 console.log("Stop recording..");
 ffmpeg.kill("SIGINT");
 });
});

http.listen(3000, () => {
 console.log("listening on *:3000");
});
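
One thing worth noting in the server code above: ffmpeg is asked to copy the stream into the FLV muxer (-f flv) while the output name ends in .webm, so whatever file ends up on disk is not actually valid WebM, which would match the errors in the log. As a hedged sketch only (writing a local file named output.mkv is an assumption, not part of the question; for real RTMP streaming the flv muxer and an rtmp:// URL would be the matching pair), a stream copy into a Matroska container at least fits the H.264 data that "video/webm;codecs=h264" produces:

// Sketch: remux the recorded chunks arriving on stdin into a Matroska file.
// The strict webm muxer only accepts VP8/VP9/AV1 video, so .mkv is used for H.264.
const { spawn } = require("child_process");

const ffmpeg = spawn("ffmpeg", [
  "-i", "pipe:0",   // read the MediaRecorder chunks from stdin
  "-c", "copy",     // no re-encoding, just remux
  "-f", "matroska", // container that accepts H.264 video and Opus audio
  "output.mkv",
]);

// The socket handler can keep writing each incoming Blob to ffmpeg.stdin as before.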




And this is my client code, using HTML and JS:




 
 
 
 
 
<script src='http://stackoverflow.com/socket.io/socket.io.js'></script>

<script>
  const socket = io();
  let mediaRecorder = null;

  const startRecording = (someStream) => {
    const mediaStream = new MediaStream();
    const videoTrack = someStream.getVideoTracks()[0];
    const audioTrack = someStream.getAudioTracks()[0];
    console.log("Video trac ", videoTrack);
    console.log("audio trac ", audioTrack);
    mediaStream.addTrack(videoTrack);
    mediaStream.addTrack(audioTrack);

    const recorderOptions = {
      mimeType: "video/webm;codecs=h264",
      videoBitsPerSecond: 3 * 1024 * 1024,
    };

    mediaRecorder = new MediaRecorder(mediaStream, recorderOptions);
    mediaRecorder.start(1000); // 1000 - the number of milliseconds to record into each Blob
    mediaRecorder.ondataavailable = (event) => {
      console.debug("Got blob data:", event.data);
      if (event.data && event.data.size > 0) {
        socket.emit("message", event.data);
      }
    };
  };

  const getVideoStream = async () => {
    try {
      const stream = await navigator.mediaDevices.getUserMedia({
        video: true,
        audio: true,
      });
      startRecording(stream);
      myVideo.srcObject = stream;
    } catch (e) {
      console.error("navigator.getUserMedia error:", e);
    }
  };

  const stopRecording = () => {
    mediaRecorder.stop();
    socket.emit("stop");
  };
</script>

hello world

<script>
  const myVideo = document.getElementById("myvideo");
  myVideo.muted = true;
</script>




Any help is appreciated!
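
On the client side, the requested mimeType is only a hint; a browser that cannot record H.264 into its WebM/Matroska container may throw or fall back to something else, and the server then receives different codecs than expected. A small, hedged sketch (the candidate list below is an assumption, not the poster's code) makes the actually used type explicit before any blobs are sent:

// Sketch: pick the first recording type this browser really supports,
// so the server side knows which codecs arrive on stdin.
const candidates = [
  "video/webm;codecs=h264",
  "video/webm;codecs=vp9,opus",
  "video/webm;codecs=vp8,opus",
];
const mimeType = candidates.find((t) => MediaRecorder.isTypeSupported(t));
if (!mimeType) {
  throw new Error("No supported MediaRecorder mimeType found");
}
// mediaStream is the stream assembled in startRecording above.
const recorder = new MediaRecorder(mediaStream, {
  mimeType,
  videoBitsPerSecond: 3 * 1024 * 1024,
});
console.log("Recording with", recorder.mimeType);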


-
libav live transcode to SFML SoundStream, garbled and noise
19 June 2021, by William Lohan
I'm so close to having this working, but playing with the output sample format or codec context doesn't seem to solve it, and I don't know where to go from here.


#include <iostream>
#include <SFML/Audio.hpp>
#include "MyAudioStream.h"

extern "C"
{
#include <libavutil/opt.h>
#include <libavutil/avutil.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/audio_fifo.h>
#include <libswresample/swresample.h>
}

void setupInput(AVFormatContext *input_format_context, AVCodecContext **input_codec_context, const char *streamURL)
{
 // av_find_input_format("mp3");
 avformat_open_input(&input_format_context, streamURL, NULL, NULL);
 avformat_find_stream_info(input_format_context, NULL);

 AVDictionary *metadata = input_format_context->metadata;
 AVDictionaryEntry *name = av_dict_get(metadata, "icy-name", NULL, 0);
 if (name != NULL)
 {
 std::cout << name->value << std::endl;
 }
 AVDictionaryEntry *title = av_dict_get(metadata, "StreamTitle", NULL, 0);
 if (title != NULL)
 {
 std::cout << title->value << std::endl;
 }

 AVStream *stream = input_format_context->streams[0];
 AVCodecParameters *codec_params = stream->codecpar;
 AVCodec *codec = avcodec_find_decoder(codec_params->codec_id);
 *input_codec_context = avcodec_alloc_context3(codec);

 avcodec_parameters_to_context(*input_codec_context, codec_params);
 avcodec_open2(*input_codec_context, codec, NULL);
}

void setupOutput(AVCodecContext *input_codec_context, AVCodecContext **output_codec_context)
{
 AVCodec *output_codec = avcodec_find_encoder(AV_CODEC_ID_PCM_S16LE); // AV_CODEC_ID_PCM_S16LE ?? AV_CODEC_ID_PCM_S16BE
 *output_codec_context = avcodec_alloc_context3(output_codec);
 (*output_codec_context)->channels = 2;
 (*output_codec_context)->channel_layout = av_get_default_channel_layout(2);
 (*output_codec_context)->sample_rate = input_codec_context->sample_rate;
 (*output_codec_context)->sample_fmt = output_codec->sample_fmts[0]; // AV_SAMPLE_FMT_S16 ??
 avcodec_open2(*output_codec_context, output_codec, NULL);
}

void setupResampler(AVCodecContext *input_codec_context, AVCodecContext *output_codec_context, SwrContext **resample_context)
{
 *resample_context = swr_alloc_set_opts(
 *resample_context,
 output_codec_context->channel_layout,
 output_codec_context->sample_fmt,
 output_codec_context->sample_rate,
 input_codec_context->channel_layout,
 input_codec_context->sample_fmt,
 input_codec_context->sample_rate,
 0, NULL);
 swr_init(*resample_context);
}

MyAudioStream::MyAudioStream()
{
 input_format_context = avformat_alloc_context();
 resample_context = swr_alloc();
}

MyAudioStream::~MyAudioStream()
{
 // clean up
 avformat_close_input(&input_format_context);
 avformat_free_context(input_format_context);
}

void MyAudioStream::load(const char *streamURL)
{

 setupInput(input_format_context, &input_codec_context, streamURL);
 setupOutput(input_codec_context, &output_codec_context);
 setupResampler(input_codec_context, output_codec_context, &resample_context);

 initialize(output_codec_context->channels, output_codec_context->sample_rate);
}

bool MyAudioStream::onGetData(Chunk &data)
{

 // init
 AVFrame *input_frame = av_frame_alloc();
 AVPacket *input_packet = av_packet_alloc();
 input_packet->data = NULL;
 input_packet->size = 0;

 // read
 av_read_frame(input_format_context, input_packet);
 avcodec_send_packet(input_codec_context, input_packet);
 avcodec_receive_frame(input_codec_context, input_frame);

 // convert
 uint8_t *converted_input_samples = (uint8_t *)calloc(output_codec_context->channels, sizeof(*converted_input_samples));
 av_samples_alloc(&converted_input_samples, NULL, output_codec_context->channels, input_frame->nb_samples, output_codec_context->sample_fmt, 0);
 swr_convert(resample_context, &converted_input_samples, input_frame->nb_samples, (const uint8_t **)input_frame->extended_data, input_frame->nb_samples);

 data.sampleCount = input_frame->nb_samples;
 data.samples = (sf::Int16 *)converted_input_samples;

 // av_freep(&converted_input_samples[0]);
 // free(converted_input_samples);
 av_packet_free(&input_packet);
 av_frame_free(&input_frame);

 return true;
}

void MyAudioStream::onSeek(sf::Time timeOffset)
{
 // no op
}

sf::Int64 MyAudioStream::onLoop()
{
 // no loop
 return -1;
}



Called with


#include <iostream>

#include "./MyAudioStream.h"

extern "C"
{
#include <libavutil/opt.h>
#include <libavutil/avutil.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

const char *streamURL = "http://s5radio.ponyvillelive.com:8026/stream.mp3";

int main(int, char **)
{

 MyAudioStream myStream;

 myStream.load(streamURL);

 std::cout << "Hello, world!" << std::endl;

 myStream.play();

 while (myStream.getStatus() == MyAudioStream::Playing)
 {
 sf::sleep(sf::seconds(0.1f));
 }

 return 0;
}
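
A hedged note on onGetData above, sketched rather than verified: sf::SoundStream consumes interleaved 16-bit samples, and Chunk::sampleCount is the total number of sf::Int16 values (frames times channels), while the code reports only nb_samples per channel and ignores how many samples swr_convert actually wrote. Under those assumptions, and reusing the member names from the question, the data-producing part could look like this:

bool MyAudioStream::onGetData(Chunk &data)
{
    AVFrame *frame = av_frame_alloc();
    AVPacket *packet = av_packet_alloc();

    // Pull one packet and decode it (a full implementation would loop until
    // avcodec_receive_frame yields a frame; simplified here for the sketch).
    if (av_read_frame(input_format_context, packet) < 0 ||
        avcodec_send_packet(input_codec_context, packet) < 0 ||
        avcodec_receive_frame(input_codec_context, frame) < 0)
    {
        av_packet_free(&packet);
        av_frame_free(&frame);
        return false; // stop playback on error or end of stream
    }

    const int channels = output_codec_context->channels;
    uint8_t *out_buf = NULL;
    av_samples_alloc(&out_buf, NULL, channels,
                     frame->nb_samples, output_codec_context->sample_fmt, 0);

    // swr_convert returns the number of samples it wrote per channel.
    const int converted = swr_convert(resample_context,
                                      &out_buf, frame->nb_samples,
                                      (const uint8_t **)frame->extended_data,
                                      frame->nb_samples);

    // SFML expects interleaved sf::Int16 data; sampleCount counts every
    // Int16 value, i.e. converted frames multiplied by the channel count.
    data.samples = (sf::Int16 *)out_buf;
    data.sampleCount = converted * channels;

    av_packet_free(&packet);
    av_frame_free(&frame);
    return true; // out_buf is intentionally not freed here, as in the question
}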

