
Other articles (88)
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
Managing creation and editing rights for objects
8 February 2011
By default, many features are restricted to administrators, but each remains independently configurable so that the minimum status required to use it can be changed, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;
-
User profiles
12 April 2011
Each user has a profile page allowing them to edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in on the site.
The user can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)
On other sites (9820)
-
"Critical error detected c0000374" when running av_packet_unref or av_frame_unref
15 May 2021, by Shivang Sharma
I am trying to read and decode frames, which works fine, but when it reaches the part that un-references the frame or packet using av_packet_unref and av_frame_unref, it sometimes gives an error on the second or third frame.

Error (copied from the Visual Studio output window):


Critical error detected c0000374
Libav.exe has triggered a breakpoint



Here is some of the reading and decoding code that is giving the error:


void Decoder::streamNextFrame(int type = 0)
{
    while (av_read_frame(this->fileFormatCtx, this->latestpacket) >= 0) {
        if (this->audioDecoder->activeAudioStream != nullptr) {
            if (this->latestpacket->stream_index == this->audioDecoder->activeAudioStream->index) {
                avcodec_send_packet(this->audioDecoder->activeStreamDecoder, this->latestpacket);
                err = avcodec_receive_frame(this->audioDecoder->activeStreamDecoder, this->decodedFrame);
                if (err == AVERROR(EAGAIN)) {
                    av_frame_unref(this->decodedFrame);
                    av_packet_unref(this->latestpacket);
                    continue;
                }

                {
                    int currentIndex = (int)this->audioFrames->size();
                    this->audioFrames->resize((int)this->audioFrames->size() + 1);
                    int nb = 0;
                    this->audioFrames->at(currentIndex).pts = (int)this->decodedFrame->pts;
                    if (this->audioDecoder->activeStreamDecoder->sample_fmt != AV_SAMPLE_FMT_S16) {
                        nb = 2048 * this->audioDecoder->activeStreamDecoder->channels;
                        printf("%i\n", nb);
                        this->audioFrames->at(currentIndex).data.resize(nb);
                        if (!swr_is_initialized(swr)) {
                            swr_alloc_set_opts(swr, this->audioDecoder->activeStreamDecoder->channel_layout, AV_SAMPLE_FMT_S16, this->audioDecoder->activeStreamDecoder->sample_rate, this->audioDecoder->activeStreamDecoder->channel_layout, this->audioDecoder->activeStreamDecoder->sample_fmt, this->audioDecoder->activeStreamDecoder->sample_rate, 0, nullptr);
                            swr_init(swr);
                        }

                        uint8_t* buffer = this->audioFrames->at(currentIndex).data.data();
                        swr_convert(swr, &buffer, nb, (const uint8_t**)this->decodedFrame->extended_data, this->decodedFrame->nb_samples);
                    }
                    else {
                        nb = this->decodedFrame->nb_samples * this->audioDecoder->activeStreamDecoder->channels;
                        this->audioFrames->at(currentIndex).data = std::vector<uint8_t>(*this->decodedFrame->extended_data, *this->decodedFrame->extended_data + (uint8_t)nb);
                    }

                    this->audioFrames->at(currentIndex).buffersize = nb;
                }

                if (err == AVERROR_EOF) {
                    this->audioDecoder->streamEnded = true;
                    av_frame_unref(this->decodedFrame);
                    av_packet_unref(this->latestpacket);
                    break;
                }
                else if (err >= 0) {
                    this->audioDecoder->streamEnded = false;
                }

                if (type != 0) {
                    av_packet_unref(this->latestpacket);
                    av_frame_unref(this->decodedFrame);
                    break;
                }
                av_packet_unref(this->latestpacket);
                av_frame_unref(this->decodedFrame);
            }
        }
        else {
            printf("No active audio stream is set\n");
            if (type != 0)
                break;
        }
    }
}



I have removed some of the code that handled video, which I don't think was causing the problem.


Some information about the above code:


this->audioFrames is a pointer to a vector of the following type:
std::vector<AudioFrameFormat>*

AudioFrameFormat is a struct defined as follows:

struct AudioFrameFormat {
    std::vector<uint8_t> data = {};
    int pts = 0;
    int buffersize = 0;
};
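
As an illustration of how large the data buffer needs to be for one converted frame, here is a minimal sketch (not the original code; the helper name convertFrameToS16 and its parameters are assumptions) that sizes the buffer with swr_get_out_samples and av_samples_get_buffer_size before calling swr_convert:

extern "C" {
#include <libswresample/swresample.h>
#include <libavutil/frame.h>
#include <libavutil/samplefmt.h>
}
#include <vector>

// Hedged sketch: convert one decoded frame to interleaved S16 and return the bytes.
// Assumes swr is an already configured and initialized SwrContext.
std::vector<uint8_t> convertFrameToS16(SwrContext* swr, const AVFrame* frame, int channels)
{
    // Upper bound on the number of output samples per channel for this input frame.
    int outSamples = swr_get_out_samples(swr, frame->nb_samples);

    // Byte size of an interleaved S16 buffer that can hold those samples.
    int outBytes = av_samples_get_buffer_size(nullptr, channels, outSamples,
                                              AV_SAMPLE_FMT_S16, 1);

    std::vector<uint8_t> data(outBytes > 0 ? outBytes : 0);
    uint8_t* out = data.data();

    // swr_convert takes the output capacity in samples per channel, not in bytes.
    int converted = swr_convert(swr, &out, outSamples,
                                (const uint8_t**)frame->extended_data, frame->nb_samples);
    if (converted > 0)
        data.resize(av_samples_get_buffer_size(nullptr, channels, converted,
                                               AV_SAMPLE_FMT_S16, 1));
    return data;
}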


swr is a private class member allocated in the constructor.
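
A minimal sketch of that allocation and the matching cleanup (the actual constructor and destructor are not shown above, so this is an assumption):

extern "C" {
#include <libswresample/swresample.h>
}

Decoder::Decoder()
{
    // Allocate an empty resampling context; it is configured later with
    // swr_alloc_set_opts and swr_init, as in streamNextFrame above.
    swr = swr_alloc();
}

Decoder::~Decoder()
{
    // Free the context and set the pointer back to nullptr.
    swr_free(&swr);
}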


Call stack: (screenshot omitted)
What I gather from the call stack is that I am not taking care of my heap memory.


Can someone please explain where the problem is, and why it sometimes runs until the third frame and sometimes only until the second frame of the audio stream?


And please tell me how I can improve this code.
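
For comparison, here is a minimal, self-contained sketch (not the original code) of the usual send/receive drain pattern from the FFmpeg decoding API; the names dec, pkt and frm are illustrative:

extern "C" {
#include <libavcodec/avcodec.h>
}

// Hedged sketch: feed one packet to the decoder and drain all resulting frames.
static int decodePacket(AVCodecContext* dec, AVPacket* pkt, AVFrame* frm)
{
    int ret = avcodec_send_packet(dec, pkt);
    if (ret < 0)
        return ret;

    // One packet may produce zero, one or several frames, so receive in a loop.
    while ((ret = avcodec_receive_frame(dec, frm)) >= 0) {
        // ... consume frm here (for example, resample or copy its samples) ...
        av_frame_unref(frm);   // release the frame's buffers before reusing it
    }

    av_packet_unref(pkt);      // the packet data is no longer needed
    // EAGAIN (needs more input) and EOF are normal here, not errors.
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}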


-
ffmpeg : Render webm from stdin using NodeJS
2 June 2015, by Vinicius Tavares
I'm having an issue trying to dump some JPEG frames, created on the fly in NodeJS, into ffmpeg in order to create a webm video.
The script attempts to do these things:
- Fork a new ffmpeg process on initialization
- Render a canvas
- Once the data in the canvas is updated, grab JPEG data from it.
- Pipe the JPEG data into the ffmpeg stdin.
- ffmpeg takes care of appending it to a webm video file.
- And this goes on forever; ffmpeg should never stop.
It should be an ever-growing video, broadcast live to all connected clients, but the result I get is just a single-frame webm.
Here is the ffmpeg fork:

var args = '-f image2pipe -r 15 -vcodec mjpeg -s 160x144 -i - -f webm -r 15 test.webm'.split(' ');
var encoder = spawn('ffmpeg', args);
encoder.stderr.pipe(process.stdout);

Here is the canvas update and pipe:
theCanvas.on('draw', function () {
    var readStream = self.canvas.jpegStream();
    readStream.pipe(self.encoder.stdin);
});

ffmpeg output:
ffmpeg version 1.2.6-7:1.2.6-1~trusty1 Copyright (c) 2000-2014 the FFmpeg developers
built on Apr 26 2014 18:52:58 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
configuration: --arch=amd64 --disable-stripping --enable-avresample --enable-pthreads --enable-runtime-cpudetect --extra-version='7:1.2.6-1~trusty1' --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --enable-bzlib --enable-libdc1394 --enable-libfreetype --enable-frei0r --enable-gnutls --enable-libgsm --enable-libmp3lame --enable-librtmp --enable-libopencv --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-vaapi --enable-vdpau --enable-libvorbis --enable-libvpx --enable-zlib --enable-gpl --enable-postproc --enable-libcdio --enable-x11grab --enable-libx264 --shlibdir=/usr/lib/x86_64-linux-gnu --enable-shared --disable-static
libavutil 52. 18.100 / 52. 18.100
libavcodec 54. 92.100 / 54. 92.100
libavformat 54. 63.104 / 54. 63.104
libavdevice 53. 5.103 / 53. 5.103
libavfilter 3. 42.103 / 3. 42.103
libswscale 2. 2.100 / 2. 2.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 2.100 / 52. 2.100
[image2pipe @ 0xee0740] Estimating duration from bitrate, this may be inaccurate
Input #0, image2pipe, from 'pipe:':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: mjpeg, yuvj420p, 160x144 [SAR 1:1 DAR 10:9], 15 tbr, 15 tbn, 15 tbc
[libvpx @ 0xec5d00] v1.3.0
Output #0, webm, to 'test.webm':
Metadata:
encoder : Lavf54.63.104
Stream #0:0: Video: vp8, yuv420p, 160x144 [SAR 1:1 DAR 10:9], q=-1--1, 200 kb/s, 1k tbn, 15 tbc
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg -> libvpx)
pipe:: Input/output error
frame= 1 fps=0.0 q=0.0 Lsize= 12kB time=00:00:00.06 bitrate=1441.1kbits/s
video:11kB audio:0kB subtitle:0 global headers:0kB muxing overhead 4.195804%

What can I do?
Thanks,
Vinicius
-
Exceeded GA’s 10M hits data limit, now what?
1 December 2021, by Joselyn Khor