
Media (1)
-
Bee video in portrait orientation
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (71)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors can edit their own information on the authors page -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (6176)
-
How to use ffmpeg in JavaScript to decode H.264 frames into RGB frames
17 June 2020, by noel
I'm trying to compile ffmpeg into JavaScript so that I can decode H.264 video streams using Node. The streams are H.264 frames packed into RTP NALUs, so any solution has to accept raw H.264 frames rather than a file name. The frames can't be in a container like MP4 or AVI, because the demuxer would need the timestamp of every frame before demuxing could occur, and I'm dealing with a real-time stream with no containers.



Streaming H.264 over RTP



Below is the basic code I'm using to listen on a UDP socket. Inside the 'message' callback, the packet is an RTP datagram; the payload portion of the datagram is an H.264 frame (P-frames and I-frames).



var PORT = 33333;
var HOST = '127.0.0.1';

var dgram = require('dgram');
var server = dgram.createSocket('udp4');

server.on('listening', function () {
 var address = server.address();
 console.log('UDP Server listening on ' + address.address + ":" + address.port);
});

server.on('message', function (message, remote) {
 console.log(remote.address + ':' + remote.port + ' - ' + message);
 var frame = parse_rtp(message);

 var rgb_frame = some_library.decode_h264(frame); // This is what I need.

});

server.bind(PORT, HOST); 
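The parse_rtp step above is not defined anywhere; for reference, the fixed RTP header it has to strip is 12 bytes plus any CSRC entries (RFC 3550). A minimal sketch in C (the language the eventual wrapper would be written in); the function name is mine:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal RTP fixed-header parse (RFC 3550). Returns the offset of the
 * payload (the H.264 NAL unit) within the datagram, or -1 on error. */
static int rtp_payload_offset(const uint8_t *pkt, size_t len) {
    if (len < 12) return -1;              /* shorter than the fixed header */
    int version = pkt[0] >> 6;            /* must be 2 for RTP */
    int csrc_count = pkt[0] & 0x0F;       /* 32-bit CSRC entries follow the header */
    if (version != 2) return -1;
    size_t off = 12 + 4 * (size_t)csrc_count;
    if (off >= len) return -1;            /* no payload left */
    return (int)off;
}
```

The JavaScript parse_rtp would do the same arithmetic on the Buffer: check `message[0] >> 6 === 2`, then slice at `12 + 4 * (message[0] & 0x0F)`.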




I found the Broadway.js library, but I couldn't get it working, and it doesn't handle the P-frames I need. I also found ffmpeg.js, but couldn't get that to work either, and it needs a whole file rather than a stream. Likewise, fluent-ffmpeg doesn't appear to support streams; all of the examples show a filename being passed to the constructor. So I decided to write my own API.



My current solution attempt



I have been able to compile ffmpeg into one big JS file, but I can't use it like that. I want to write an API around ffmpeg and then expose those functions to JS. So it seems I need to do the following:



1. Compile the ffmpeg components (avcodec, avutil, etc.) into LLVM bitcode.
2. Write a C wrapper that exposes the decoding functionality and uses EMSCRIPTEN_KEEPALIVE.
3. Use emcc to compile the wrapper and link it against the bitcode created in step 1.
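Step 2 might look roughly like the following, a sketch assuming ffmpeg's send/receive decode API (avcodec_send_packet / avcodec_receive_frame); the function names are mine, and the decoded frame comes out as YUV planes (an RGB conversion via libswscale would be a separate step):

```c
/* decoder.c - sketch of the wrapper from step 2. */
#include <stdint.h>
#include <libavcodec/avcodec.h>
#ifdef __EMSCRIPTEN__
#include <emscripten.h>
#else
#define EMSCRIPTEN_KEEPALIVE
#endif

static AVCodecContext *ctx;
static AVFrame *frame;
static AVPacket *pkt;

/* Open an H.264 decoder once, up front. Returns 0 on success. */
EMSCRIPTEN_KEEPALIVE
int decoder_init(void) {
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec) return -1;
    ctx = avcodec_alloc_context3(codec);
    frame = av_frame_alloc();
    pkt = av_packet_alloc();
    if (!ctx || !frame || !pkt) return -1;
    return avcodec_open2(ctx, codec, NULL) < 0 ? -1 : 0;
}

/* Feed one Annex-B framed NAL unit (start code included).
 * Returns 0 when a full frame was decoded into 'frame',
 * 1 when the decoder needs more input, -1 on error. */
EMSCRIPTEN_KEEPALIVE
int decoder_decode(uint8_t *data, int size) {
    pkt->data = data;
    pkt->size = size;
    if (avcodec_send_packet(ctx, pkt) < 0) return -1;
    int ret = avcodec_receive_frame(ctx, frame);
    if (ret == AVERROR(EAGAIN)) return 1;
    return ret < 0 ? -1 : 0;
}
```

On the JS side, exported functions appear on the Emscripten module as Module._decoder_init() and Module._decoder_decode(ptr, size), with the packet bytes copied into the wasm heap first.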









I found a WASM+ffmpeg tutorial, but it's in Chinese and some of the steps aren't clear. In particular, there is this step:



emcc web.c process.c ../lib/libavformat.bc ../lib/libavcodec.bc ../lib/libswscale.bc ../lib/libswresample.bc ../lib/libavutil.bc \




Where I think I'm stuck :(



I don't understand how the ffmpeg components get compiled into separate *.bc files. I followed the emmake commands in that article, and I end up with one big .bc file.



Two questions



1. Does anyone know the steps to compile ffmpeg with emscripten so that I can expose an API to JavaScript?

2. Is there a better way (with decent documentation/examples) to decode H.264 video streams using Node?

-
Bug #4526 (New): Remove the dates from the copyrights
15 July 2020, by Franck D
Hello :)
OK, the dates no longer have any value, so what should be done about the header blocks?
My pick is option 1:
- SPIP, Systeme de publication pour l’internet
* - Copyright (c)
- Arnaud Martin, Antoine Pitrou, Philippe Riviere, Emmanuel Saint-James
* - Ce programme est un logiciel libre distribue sous licence GNU/GPL.
- Pour plus de details voir le fichier COPYING.txt ou l’aide en ligne.
-----
Option 2 (accents and ©)?
- SPIP, Système de publication pour l’internet
* - Copyright ©
- Arnaud Martin, Antoine Pitrou, Philippe Riviere, Emmanuel Saint-James
* - Ce programme est un logiciel libre distribue sous licence GNU/GPL.
- Pour plus de détails voir le fichier COPYING.txt ou l’aide en ligne.
Something else????
- SPIP, Systeme de publication pour l’internet
-
How to properly pass an asset FileDescriptor to FFmpeg using JNI in Android
6 January 2021, by William Seemann
I'm trying to retrieve metadata in Android using FFmpeg, JNI, and a Java FileDescriptor, and it isn't working. I know FFmpeg supports the pipe protocol, so I'm trying to emulate:
cat test.mp3 | ffmpeg -i pipe:0
programmatically. I use the following code to get a FileDescriptor from an asset bundled with the Android application:


FileDescriptor fd = getContext().getAssets().openFd("test.mp3").getFileDescriptor();
setDataSource(fd, 0, 0x7ffffffffffffffL); // native function, shown below




Then, in my native code (C++) I get the file descriptor by calling:



static void wseemann_media_FFmpegMediaMetadataRetriever_setDataSource(JNIEnv *env, jobject thiz, jobject fileDescriptor, jlong offset, jlong length)
{
    //...

    int fd = jniGetFDFromFileDescriptor(env, fileDescriptor); // function contents shown below

    //...
}

// function contents
static int jniGetFDFromFileDescriptor(JNIEnv *env, jobject fileDescriptor) {
    jint fd = -1;
    jclass fdClass = env->FindClass("java/io/FileDescriptor");

    if (fdClass != NULL) {
        jfieldID fdClassDescriptorFieldID = env->GetFieldID(fdClass, "descriptor", "I");
        if (fdClassDescriptorFieldID != NULL && fileDescriptor != NULL) {
            fd = env->GetIntField(fileDescriptor, fdClassDescriptorFieldID);
        }
    }

    return fd;
}




I then pass the file descriptor's pipe number (in C) to FFmpeg:



char path[256] = "";

FILE *file = fdopen(fd, "rb");

if (file && (fseek(file, offset, SEEK_SET) == 0)) {
    char str[20];
    sprintf(str, "pipe:%d", fd);
    strcat(path, str);
}

State *state = av_mallocz(sizeof(State));
state->pFormatCtx = NULL;

if (avformat_open_input(&state->pFormatCtx, path, NULL, &options) != 0) { // Note: path is in the format "pipe:<fd>"
    printf("Metadata could not be retrieved\n");
    *ps = NULL;
    return FAILURE;
}

if (avformat_find_stream_info(state->pFormatCtx, NULL) < 0) {
    printf("Metadata could not be retrieved\n");
    avformat_close_input(&state->pFormatCtx);
    *ps = NULL;
    return FAILURE;
}

// Find the first audio and video stream
for (i = 0; i < state->pFormatCtx->nb_streams; i++) {
    if (state->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO && video_index < 0) {
        video_index = i;
    }

    if (state->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO && audio_index < 0) {
        audio_index = i;
    }

    set_codec(state->pFormatCtx, i);
}

if (audio_index >= 0) {
    stream_component_open(state, audio_index);
}

if (video_index >= 0) {
    stream_component_open(state, video_index);
}

printf("Found metadata\n");
AVDictionaryEntry *tag = NULL;
while ((tag = av_dict_get(state->pFormatCtx->metadata, "", tag, AV_DICT_IGNORE_SUFFIX))) {
    printf("Key %s: \n", tag->key);
    printf("Value %s: \n", tag->value);
}

*ps = state;
return SUCCESS;



My issue is that avformat_open_input doesn't fail, but it also doesn't let me retrieve any metadata or frames. The same code works if I use a regular file URI (e.g. file:///sdcard/test.mp3) as the path. What am I doing wrong? Thanks in advance.


Note: if you would like to look at all of the code, I'm trying to solve this issue in order to provide this functionality for my library: FFmpegMediaMetadataRetriever.