
Other articles (66)
-
Publishing on MediaSPIP
13 June 2013 — Can I post content from an iPad tablet?
Yes, provided your MediaSPIP installation is at version 0.2 or higher. If in doubt, ask the administrator of your MediaSPIP. -
MediaSPIP 0.1 Beta version
25 April 2011 — MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
Submit bugs and patches
13 April 2011 — Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
- the browser you are using, including the exact version;
- as precise an explanation of the problem as possible;
- if possible, the steps taken that led to the problem;
- a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
On other sites (7425)
-
Using decibel values, how to detect if the user is talking or silent ?
26 June 2021, by Nadir Abbas
I have used FFmpeg to extract decibel (or RMS? I am not familiar with the units) values of the audio volume from an MP4. I have 20 samples per frame.




How can I use these values (which are negative in almost all frames) to determine whether a frame is silent or has audio (music, speech, etc.)?
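
One plausible approach (a sketch, not a definitive answer): treat a frame as silent when its loudest level sample stays below a fixed dBFS threshold, and require several consecutive quiet frames before declaring silence, so that brief dips are ignored. The threshold and run length below are assumptions to tune by ear.

# A minimal sketch: threshold per-frame level samples (dBFS, so more
# negative means quieter). THRESHOLD_DB and MIN_RUN are assumptions.
THRESHOLD_DB = -40.0  # frames whose loudest sample stays below this are "quiet"
MIN_RUN = 5           # this many consecutive quiet frames count as silence

def frame_is_quiet(samples, threshold=THRESHOLD_DB):
    """samples: the ~20 level values measured for one video frame."""
    return max(samples) < threshold

def silence_flags(frames):
    """Yield True/False per frame, using a run-length rule to ignore blips."""
    run = 0
    for samples in frames:
        run = run + 1 if frame_is_quiet(samples) else 0
        yield run >= MIN_RUN

# Fabricated example: one loud frame followed by six quiet ones.
frames = [[-12.0] * 20] + [[-55.0] * 20] * 6
print(list(silence_flags(frames)))  # last two frames report as silent

For comparison, FFmpeg itself ships a silencedetect audio filter (e.g. -af silencedetect=noise=-40dB:d=0.5) that applies a similar threshold-plus-duration rule.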


-
Qt Open Source Product 2 - Vlc Demo [closed]
3 June 2021, by cool code
Ⅰ. Preface


The previous demo was built directly on the FFmpeg kernel, and FFmpeg is so powerful that many beginners struggle to understand it. Many users also only need simple video stream playback and have no need to get involved in the complexities of decoding and transcoding, which is where VLC comes in handy: it is a deep encapsulation of FFmpeg that exposes a friendly interface. MPV does much the same thing, and in one respect MPV is even better than VLC: it is just one library file, apparently packaged as a static library, whereas VLC ships with a pile of dynamic library files and plug-in files.
Of course, the appeal of VLC is that it takes only a few lines of code to get started. Letting beginners see a result immediately matters a great deal: it is exciting, it gets you to the next step of coding more quickly, and it lets you experience the fun of coding sooner.
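
To make the "few lines" claim concrete, here is a minimal sketch. The demo in this article is C++/Qt, but the same idea is shortest to show with the python-vlc bindings (my choice for illustration, not part of the original demo); the RTSP URL is the placeholder address used later in the article.

# A minimal sketch of "a few lines to start playback" with python-vlc.
# The URL is a placeholder; decoding, rendering and audio output are all
# handled internally by libVLC.
import time
import vlc

player = vlc.MediaPlayer("rtsp://192.168.1.128:554/1")
player.play()     # starts the stream asynchronously
time.sleep(30)    # keep the script alive while the stream plays
player.stop()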


Ⅱ. Code framework


#include "ffmpeg.h"

FFmpegThread::FFmpegThread(QObject *parent) : QThread(parent)
{
 setObjectName("FFmpegThread");
 stopped = false;
 isPlay = false;

 frameFinish = false;
 videoWidth = 0;
 videoHeight = 0;
 oldWidth = 0;
 oldHeight = 0;
 videoStreamIndex = -1;
 audioStreamIndex = -1;

 url = "rtsp://192.168.1.128:554/1";

 buffer = NULL;
 avPacket = NULL;
 avFrame = NULL;
 avFrame2 = NULL;
 avFrame3 = NULL;
 avFormatContext = NULL;
 videoCodec = NULL;
 audioCodec = NULL;
 swsContext = NULL;

 options = NULL;
 videoDecoder = NULL;
 audioDecoder = NULL;

 //Initial registration; only needs to happen once per process
 FFmpegThread::initlib();
}

//Only needs to be initialized once per process
void FFmpegThread::initlib()
{
 static QMutex mutex;
 QMutexLocker locker(&mutex);
 static bool isInit = false;
 if (!isInit) {
 //Register all available file formats and decoders in the library
 av_register_all();
 //Register all devices, mainly for local camera playback support
#ifdef ffmpegdevice
 avdevice_register_all();
#endif
 //Initialize the network stream format, which must be executed first when using the network stream
 avformat_network_init();

 isInit = true;
 qDebug() << TIMEMS << "init ffmpeg lib ok" << " version:" << FFMPEG_VERSION;
#if 0
 //Output all supported decoder names
 QStringList listCodeName;
 AVCodec *code = av_codec_next(NULL);
 while (code != NULL) {
 listCodeName << code->name;
 code = code->next;
 }

 qDebug() << TIMEMS << listCodeName;
#endif
 }
}

bool FFmpegThread::init()
{
 //Before opening the stream, set various parameters such as probe time, timeout, and maximum delay
 //Set the cache size; for 1080p the value can be increased
 av_dict_set(&options, "buffer_size", "8192000", 0);
 //Open in tcp mode, if open in udp mode, replace tcp with udp
 av_dict_set(&options, "rtsp_transport", "tcp", 0);
 //Set the timeout disconnection time, the unit is microseconds, 3000000 means 3 seconds
 av_dict_set(&options, "stimeout", "3000000", 0);
 //Set the maximum delay, in microseconds, 1000000 means 1 second
 av_dict_set(&options, "max_delay", "1000000", 0);
 //Let FFmpeg choose the number of threads automatically
 av_dict_set(&options, "threads", "auto", 0);

 //Open video stream
 avFormatContext = avformat_alloc_context();

 int result = avformat_open_input(&avFormatContext, url.toStdString().data(), NULL, &options);
 if (result < 0) {
 qDebug() << TIMEMS << "open input error" << url;
 return false;
 }

 //Release setting parameters
 if (options != NULL) {
 av_dict_free(&options);
 }

 //Get stream information
 result = avformat_find_stream_info(avFormatContext, NULL);
 if (result < 0) {
 qDebug() << TIMEMS << "find stream info error";
 return false;
 }

 //----------Video stream section; wrapped in a block to make code folding easier----------
 if (1) {
 videoStreamIndex = av_find_best_stream(avFormatContext, AVMEDIA_TYPE_VIDEO, -1, -1, &videoDecoder, 0);
 if (videoStreamIndex < 0) {
 qDebug() << TIMEMS << "find video stream index error";
 return false;
 }

 //Get video stream
 AVStream *videoStream = avFormatContext->streams[videoStreamIndex];

 //Get the video stream decoder, or specify the decoder
 videoCodec = videoStream->codec;
 videoDecoder = avcodec_find_decoder(videoCodec->codec_id);
 //videoDecoder = avcodec_find_decoder_by_name("h264_qsv");
 if (videoDecoder == NULL) {
 qDebug() << TIMEMS << "video decoder not found";
 return false;
 }

 //Set up accelerated decoding
 videoCodec->lowres = videoDecoder->max_lowres;
 videoCodec->flags2 |= AV_CODEC_FLAG2_FAST;

 //Open the video decoder
 result = avcodec_open2(videoCodec, videoDecoder, NULL);
 if (result < 0) {
 qDebug() << TIMEMS << "open video codec error";
 return false;
 }

 //Get the resolution size
 videoWidth = videoStream->codec->width;
 videoHeight = videoStream->codec->height;

 //If the width and height are not obtained, return
 if (videoWidth == 0 || videoHeight == 0) {
 qDebug() << TIMEMS << "find width height error";
 return false;
 }

 QString videoInfo = QString("Video stream info -> index: %1 decode: %2 format: %3 duration: %4 s Resolution: %5*%6")
 .arg(videoStreamIndex).arg(videoDecoder->name).arg(avFormatContext->iformat->name)
 .arg((avFormatContext->duration) / 1000000).arg(videoWidth).arg(videoHeight);
 qDebug() << TIMEMS << videoInfo;
 }
 //----------End of the video stream section----------

 //----------Audio stream section; wrapped in a block to make code folding easier----------
 if (1) {
 //Loop to find audio stream index
 audioStreamIndex = -1;
 for (uint i = 0; i < avFormatContext->nb_streams; i++) {
 if (avFormatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
 audioStreamIndex = i;
 break;
 }
 }

 //Some sources have no audio stream, so this is not treated as a fatal error
 if (audioStreamIndex == -1) {
 qDebug() << TIMEMS << "find audio stream index error";
 } else {
 //Get audio stream
 AVStream *audioStream = avFormatContext->streams[audioStreamIndex];
 audioCodec = audioStream->codec;

 //Get the audio stream decoder, or specify the decoder
 audioDecoder = avcodec_find_decoder(audioCodec->codec_id);
 //audioDecoder = avcodec_find_decoder_by_name("aac");
 if (audioDecoder == NULL) {
 qDebug() << TIMEMS << "audio codec not found";
 return false;
 }

 //Open the audio decoder
 result = avcodec_open2(audioCodec, audioDecoder, NULL);
 if (result < 0) {
 qDebug() << TIMEMS << "open audio codec error";
 return false;
 }

 QString audioInfo = QString("Audio stream information -> index: %1 decode: %2 Bit rate: %3 channel num: %4 sampling: %5")
 .arg(audioStreamIndex).arg(audioDecoder->name).arg(avFormatContext->bit_rate)
 .arg(audioCodec->channels).arg(audioCodec->sample_rate);
 qDebug() << TIMEMS << audioInfo;
 }
 }
 //----------End of audio stream----------

 //Pre-allocate the packet and frames
 avPacket = av_packet_alloc();
 avFrame = av_frame_alloc();
 avFrame2 = av_frame_alloc();
 avFrame3 = av_frame_alloc();

 //Compare with the previous stream's width and height; reallocate the buffer when they change
 if (oldWidth != videoWidth || oldHeight != videoHeight) {
 int byte = avpicture_get_size(AV_PIX_FMT_RGB32, videoWidth, videoHeight);
 //Free any previously allocated buffer before allocating a new one
 if (buffer != NULL) {
 av_free(buffer);
 }
 buffer = (uint8_t *)av_malloc(byte * sizeof(uint8_t));
 oldWidth = videoWidth;
 oldHeight = videoHeight;
 }

 //Define pixel format
 AVPixelFormat srcFormat = AV_PIX_FMT_YUV420P;
 AVPixelFormat dstFormat = AV_PIX_FMT_RGB32;
 //Get the decoded format through the decoder
 srcFormat = videoCodec->pix_fmt;

 //SWS_FAST_BILINEAR is the fastest scaling flag but may lose some picture detail; substitute another flag if needed
 int flags = SWS_FAST_BILINEAR;

 //Open up a cache to store one frame of data
 //Either of the following two calls works; avpicture_fill is deprecated
 //avpicture_fill((AVPicture *)avFrame3, buffer, dstFormat, videoWidth, videoHeight);
 av_image_fill_arrays(avFrame3->data, avFrame3->linesize, buffer, dstFormat, videoWidth, videoHeight, 1);

 //Image conversion
 swsContext = sws_getContext(videoWidth, videoHeight, srcFormat, videoWidth, videoHeight, dstFormat, flags, NULL, NULL, NULL);

 //Output video information
 //av_dump_format(avFormatContext, 0, url.toStdString().data(), 0);

 //qDebug() << TIMEMS << "init ffmpeg finsh";
 return true;
}

void FFmpegThread::run()
{
 while (!stopped) {
 //Perform initialization based on the flag bit
 if (isPlay) {
 this->init();
 isPlay = false;
 continue;
 }

 if (av_read_frame(avFormatContext, avPacket) >= 0) {
 //Determine whether the current packet is video or audio
 int index = avPacket->stream_index;
 if (index == videoStreamIndex) {
 //Decode the video stream; avcodec_decode_video2 is deprecated, so the send/receive API is used below
#if 0
 avcodec_decode_video2(videoCodec, avFrame2, &frameFinish, avPacket);
#else
 frameFinish = avcodec_send_packet(videoCodec, avPacket);
 if (frameFinish < 0) {
 //Release the packet before skipping, otherwise it leaks
 av_packet_unref(avPacket);
 continue;
 }

 frameFinish = avcodec_receive_frame(videoCodec, avFrame2);
 if (frameFinish < 0) {
 av_packet_unref(avPacket);
 continue;
 }
#endif

 if (frameFinish >= 0) {
 //Turn the data into a picture
 sws_scale(swsContext, (const uint8_t *const *)avFrame2->data, avFrame2->linesize, 0, videoHeight, avFrame3->data, avFrame3->linesize);

 //The following two methods can be used
 //QImage image(avFrame3->data[0], videoWidth, videoHeight, QImage::Format_RGB32);
 QImage image((uchar *)buffer, videoWidth, videoHeight, QImage::Format_RGB32);
 if (!image.isNull()) {
 emit receiveImage(image);
 }

 msleep(1);
 }
 } else if (index == audioStreamIndex) {
 //Audio packets are not processed here; they are handed over to SDL for playback
 }
 }

 //Release the packet's payload; the AVPacket struct itself is reused on the next iteration
 av_packet_unref(avPacket);
 msleep(1);
 }

 //Release resources after the thread ends
 free();
 stopped = false;
 isPlay = false;
 qDebug() << TIMEMS << "stop ffmpeg thread";
}

void FFmpegThread::setUrl(const QString &url)
{
 this->url = url;
}

void FFmpegThread::free()
{
 if (swsContext != NULL) {
 sws_freeContext(swsContext);
 swsContext = NULL;
 }

 if (avPacket != NULL) {
 //av_packet_free unreferences the packet and frees the struct allocated by av_packet_alloc
 av_packet_free(&avPacket);
 }

 if (avFrame != NULL) {
 av_frame_free(&avFrame);
 avFrame = NULL;
 }

 if (avFrame2 != NULL) {
 av_frame_free(&avFrame2);
 avFrame2 = NULL;
 }

 if (avFrame3 != NULL) {
 av_frame_free(&avFrame3);
 avFrame3 = NULL;
 }

 if (videoCodec != NULL) {
 avcodec_close(videoCodec);
 videoCodec = NULL;
 }

 if (audioCodec != NULL) {
 avcodec_close(audioCodec);
 audioCodec = NULL;
 }

 if (avFormatContext != NULL) {
 avformat_close_input(&avFormatContext);
 avFormatContext = NULL;
 }

 av_dict_free(&options);
 //qDebug() << TIMEMS << "close ffmpeg ok";
}

void FFmpegThread::play()
{
 //Let the thread perform initialization through the flag bit
 isPlay = true;
}

void FFmpegThread::pause()
{

}

void FFmpegThread::next()
{

}

void FFmpegThread::stop()
{
 //Stop the thread through the flag
 stopped = true;
}

//Real-time video display widget class
FFmpegWidget::FFmpegWidget(QWidget *parent) : QWidget(parent)
{
 thread = new FFmpegThread(this);
 connect(thread, SIGNAL(receiveImage(QImage)), this, SLOT(updateImage(QImage)));
 image = QImage();
}

FFmpegWidget::~FFmpegWidget()
{
 close();
}

void FFmpegWidget::paintEvent(QPaintEvent *)
{
 if (image.isNull()) {
 return;
 }

 //qDebug() << TIMEMS << "paintEvent" << objectName();
 QPainter painter(this);
 painter.drawImage(this->rect(), image);
}

void FFmpegWidget::updateImage(const QImage &image)
{
 //this->image = image.copy();
 this->image = image;
 this->update();
}

void FFmpegWidget::setUrl(const QString &url)
{
 thread->setUrl(url);
}

void FFmpegWidget::open()
{
 //qDebug() << TIMEMS << "open video" << objectName();
 clear();

 thread->play();
 thread->start();
}

void FFmpegWidget::pause()
{
 thread->pause();
}

void FFmpegWidget::next()
{
 thread->next();
}

void FFmpegWidget::close()
{
 //qDebug() << TIMEMS << "close video" << objectName();
 if (thread->isRunning()) {
 thread->stop();
 thread->quit();
 thread->wait(500);
 }

 QTimer::singleShot(1, this, SLOT(clear()));
}

void FFmpegWidget::clear()
{
 image = QImage();
 update();
}




Ⅲ. Renderings




Ⅳ. Open source code download URL


1. Dropbox download URL:


https://www.dropbox.com/sh/n58ucs57pscp25e/AABWBQlg4U3Oz2WF9YOJDrj1a?dl=0


2. Box download URL:


https://app.box.com/s/x48a7ttpk667afqqdk7t1fqok4fmvmyv


-
Developing MobyCAIRO
26 May 2021, by Multimedia Mike — General
I recently published a tool called MobyCAIRO. The ‘CAIRO’ part stands for Computer-Assisted Image ROtation, while the ‘Moby’ prefix refers to its role in helping process artifact image scans to submit to the MobyGames database. The tool is meant to provide an accelerated workflow for rotating and cropping image scans. It works on both Windows and Linux. Hopefully, it can solve similar workflow problems for other people.
As of this writing, MobyCAIRO has not been tested on Mac OS X yet– I expect some issues there that should be easily solvable if someone cares to test it.
The rest of this post describes my motivations and how I arrived at the solution.
Background
I have scanned well in excess of 2100 images for MobyGames and other purposes in the past 16 years or so. The workflow looks like this:
Image workflow
It should be noted that my original workflow featured me manually rotating the artifact on the scanner bed in order to ensure straightness, because I guess I thought that rotate functions in image editing programs constituted dark, unholy magic or something. So my workflow used to be even more arduous:
I can’t believe I had the patience to do this for hundreds of scans
Sometime last year, I was sitting down to perform some more scanning and found myself dreading the oncoming tedium of straightening and cropping the images. This prompted a pivotal question:
Why can’t a computer do this for me?
After all, I have always been a huge proponent of making computers handle the most tedious, repetitive, mind-numbing, and error-prone tasks. So I did some web searching to find if there were any solutions that dealt with this. I also consulted with some like-minded folks who have to cope with the same tedious workflow.
I came up empty-handed. So I endeavored to develop my own solution.
Problem Statement and Prior Work
I want to develop a workflow that can automatically rotate an image so that it is straight, and also find the most likely crop rectangle, uniformly whitening the area outside of the crop area (in the case of circles).
As mentioned, I checked to see if any other programs can handle this, starting with my usual workhorse, Photoshop Elements. But I can’t expect the trimmed-down version to do everything. I tried to find out if its big brother could handle the task, but couldn’t find a definitive answer on that. Nor could I find any other tools that seem to take an interest in optimizing this particular workflow.
When I brought this up to some peers, I received some suggestions, including an idea that the venerable GIMP had a feature like this, but I could not find any evidence. Further, I would get responses of “Program XYZ can do image rotation and cropping.” I had to tamp down on the snark to avoid saying “Wow! An image editor that can perform rotation AND cropping? What a game-changer!” Rotation and cropping features are table stakes for any halfway competent image editor for the last 25 or so years at least. I am hoping to find or create a program which can lend a bit of programmatic assistance to the task.
Why can’t other programs handle this? The answer seems fairly obvious: image editing tools are general tools and I want a highly customized workflow. It’s not reasonable to expect a turnkey solution to do this.
Brainstorming An Approach
I started with the happiest of happy cases— a disc that needed archiving (a marketing/press assets CD-ROM from a video game company, contents described here) which appeared to have some pretty clear straight lines:
My idea was to try to find straight lines in the image and then rotate the image so that the image is parallel to the horizontal based on the longest single straight line detected.
I just needed to figure out how to find a straight line inside of an image. Fortunately, I quickly learned that this is very much a solved problem thanks to something called the Hough transform. As a bonus, I read that this is also the tool I would want to use for finding circles, when I got to that part. The nice thing about knowing the formal algorithm to use is being able to find efficient, optimized libraries which already implement it.
Early Prototype
A little searching for how to perform a Hough transform in Python led me first to scikit. I was able to rapidly produce a prototype that did some basic image processing. However, running the Hough transform directly on the image and rotating according to the longest line segment discovered turned out not to yield expected results.
It also took a very long time to chew on the 3300×3300 raw image– certainly longer than I care to wait for an accelerated workflow concept. The key, however, is that you are apparently not supposed to run the Hough transform on a raw image– you need to compute the edges first, and then attempt to determine which edges are ‘straight’. The recommended algorithm for this step is the Canny edge detector. After applying this, I get the expected rotation:
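
A minimal sketch of that Canny-then-Hough pipeline, assuming scikit-image (the file name, parameters, and rotation sign are illustrative guesses, not the author’s actual code):

# Sketch: detect edges, find straight segments, rotate by the longest one.
import math
from skimage import io
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line, rotate

image = io.imread("scan.png", as_gray=True)   # hypothetical input scan
edges = canny(image, sigma=2.0)               # edge map first, not the raw image

# Candidate straight segments as ((x0, y0), (x1, y1)) pairs.
segments = probabilistic_hough_line(edges, threshold=10,
                                    line_length=200, line_gap=5)

def seg_len(seg):
    (x0, y0), (x1, y1) = seg
    return math.hypot(x1 - x0, y1 - y0)

# Rotate according to the longest detected segment (assumes at least one hit;
# the sign may need flipping depending on coordinate conventions).
(x0, y0), (x1, y1) = max(segments, key=seg_len)
angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
straightened = rotate(image, angle, resize=True)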
The algorithm also completes in a few seconds. So this is a good early result and I was feeling pretty confident. But, again– happiest of happy cases. I should also mention at this point that I had originally envisioned a tool that I would simply run against a scanned image and it would automatically/magically make the image straight, followed by a perfect crop.
Along came my MobyGames comrade Foxhack to disabuse me of the hope of ever developing a fully automated tool. Just try and find a usefully long straight line in this:
Darn it, Foxhack…
There are straight edges, to be sure. But my initial brainstorm of rotating according to the longest straight edge looks infeasible. Further, it’s at this point that we start brainstorming that perhaps we could match on ratings badges such as the standard ESRB badges omnipresent on U.S. video games. This gets into feature detection and complicates things.
This Needs To Be Interactive
At this point in the effort, I came to terms with the fact that the solution will need to have some element of interactivity. I will also need to get out of my safe Linux haven and figure out how to develop this on a Windows desktop, something I am not experienced with.
I initially dreamed up an impressive beast of a program written in C++ that leverages Windows desktop GUI frameworks, OpenGL for display and real-time rotation, GPU acceleration for image analysis and processing tricks, and some novel input concepts. I thought GPU acceleration would be crucial since I have a fairly good GPU on my main Windows desktop and I hear that these things are pretty good at image processing.
I created a list of prototyping tasks on a Trello board and made a decent amount of headway on prototyping all the various pieces that I would need to tie together in order to make this a reality. But it was ultimately slow going when you can only grab an hour or two here and there to try to get anything done.
Settling On A Solution
Recently, I was determined to get a set of old shareware discs archived. I ripped the data a year ago but I was blocked on the scanning task because I knew that would also involve tedious straightening and cropping. So I finally got all the scans done, which was reasonably quick. But I was determined not to manually post-process them.
This was fairly recent, but I can’t quite recall how I managed to come across the OpenCV library and its Python bindings. OpenCV is an amazing library that provides a significant toolbox for performing image processing tasks. Not only that, it provides “just enough” UI primitives to be able to quickly create a basic GUI for your program, including image display via multiple windows, buttons, and keyboard/mouse input. Furthermore, OpenCV seems to be plenty fast enough to do everything I need in real time, just with (accelerated where appropriate) CPU processing.
So I went to work porting the ideas from the simple standalone Python/scikit tool. I thought of a refinement to the straight line detector– instead of just finding the longest straight edge, it creates a histogram of 360 rotation angles, and builds a list of lines corresponding to each angle. Then it sorts the angles by cumulative line length and allows the user to iterate through this list, which will hopefully provide the most likely straightened angle up front. Further, the tool allows making fine adjustments by 1/10 of a degree via the keyboard, not the mouse. It does all this while highlighting in red the straight line segments that are parallel to the horizontal axis, per the current candidate angle.
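
A sketch of that angle-histogram refinement, using OpenCV’s Python bindings (the bucket size and Hough parameters are illustrative assumptions, not MobyCAIRO’s actual values):

# Sketch: accumulate total segment length per candidate rotation angle.
import math
from collections import defaultdict

import cv2
import numpy as np

gray = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
edges = cv2.Canny(gray, 50, 150)
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=80, minLineLength=100, maxLineGap=5)

length_by_angle = defaultdict(float)
if segments is not None:
    for x0, y0, x1, y1 in segments[:, 0]:
        angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 180.0
        bucket = round(angle, 1)  # 1/10-degree buckets
        length_by_angle[bucket] += math.hypot(x1 - x0, y1 - y0)

# Candidate straightening angles, most plausible (longest total length) first.
candidates = sorted(length_by_angle, key=length_by_angle.get, reverse=True)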
The tool draws a light-colored grid over the frame to aid the user in visually verifying the straightness of the image. Further, the program has a mode that allows the user to see the algorithm’s detected edges:
For the cropping phase, the program uses the Hough circle transform in a similar manner, finding the most likely circles (if the image to be processed is supposed to be a circle) and allowing the user to cycle among them while making precise adjustments via the keyboard, again, rather than the mouse.
Running the Hough circle transform is a significantly more intensive operation than the line transform. When I ran it on a full 3300×3300 image, it ran for a long time. I didn’t let it run longer than a minute before forcibly ending the program. Is this approach unworkable? Not quite– it turns out that the transform is just as effective when shrinking the image to 400×400, and it completes in under 2 seconds on my Core i5 CPU.
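
A sketch of the shrink-then-detect trick (OpenCV Python; the parameters are illustrative assumptions, and it assumes a roughly square scan so one scale factor suffices):

# Sketch: run the Hough circle transform on a 400x400 thumbnail, then map
# the winning circle back to full resolution.
import cv2

gray = cv2.imread("disc_scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
h, w = gray.shape
small = cv2.resize(gray, (400, 400))
scale = w / 400.0  # factor to map detections back to the original scan

circles = cv2.HoughCircles(small, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=100, maxRadius=200)
if circles is not None:
    # Each candidate is (center_x, center_y, radius) in thumbnail coordinates.
    x, y, r = circles[0][0] * scale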
For rectangular cropping, I just settled on using OpenCV’s built-in region-of-interest (ROI) facility. I tried to intelligently find the best candidate rectangle and allow fine adjustments via the keyboard, but I wasn’t having much success, so I took a path of lesser resistance.
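
For reference, that built-in facility amounts to a single call; a minimal sketch (window and file names are placeholders):

# Sketch: OpenCV's interactive region-of-interest selection.
import cv2

img = cv2.imread("scan.png")
# Drag a rectangle with the mouse, then press ENTER or SPACE to confirm.
x, y, w, h = cv2.selectROI("crop", img, showCrosshair=False)
cropped = img[y:y + h, x:x + w]
cv2.imwrite("scan_cropped.png", cropped)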
Packaging and Residual Weirdness
I realized that this tool would be more useful to a broader Windows-using base of digital preservationists if they didn’t have to install Python, establish a virtual environment, and install the prerequisite dependencies. Thus, I made the effort to figure out how to wrap the entire thing up into a monolithic Windows EXE binary. It is available from the project’s Github release page (another thing I figured out for the sake of this project!).
The binary is pretty heavy, weighing in at a bit over 50 megabytes. You might advise using compression– it IS compressed! Before I figured out the --onefile option for pyinstaller.exe, the generated dist/ subdirectory was 150 MB. Among other things, there’s a 30 MB FORTRAN BLAS library packaged in!
Conclusion and Future Directions
Once I got it all working with a simple tkinter UI up front in order to select between circle and rectangle crop modes, I unleashed the tool on 60 or so scans in bulk, using the Windows forfiles command (another learning experience). I didn’t put a clock on the effort, but it felt faster. Of course, I was bursting with pride the whole time because I was using my own tool. I just wish I had thought of it sooner. But, really, with 2100+ scans under my belt, I’m just getting started– I literally have thousands more artifacts to scan for preservation.
The tool isn’t perfect, of course. Just tonight, I threw another scan at MobyCAIRO. Just go ahead and try to find straight lines in this specimen:
I eventually had to use the text left and right of center to line up against the grid with the manual keyboard adjustments. Still, I’m impressed by how these computer vision algorithms can see patterns I can’t, highlighting lines I never would have guessed at.
I’m eager to play with OpenCV some more, particularly the video processing functions, perhaps even some GPU-accelerated versions.