
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (93)
-
Managing creation and editing rights for objects
8 February 2011, by
By default, many features are restricted to administrators, but the minimum status required to use them can be configured independently, notably: writing content on the site, which can be modified in the form template management; adding notes to articles; adding captions and annotations to images;
-
Uploading media and themes via FTP
31 May 2013, by
MediaSPIP also handles media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
From the start, you will find the following directories in your FTP space:
config/: the site's configuration directory
IMG/: media already processed and online on the site
local/: the website's cache directory
themes/: custom themes or stylesheets
tmp/: working directory (...) -
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site and around MediaSPIP in general, aims to avoid reference to Web 2.0 and the companies that profit from media-sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
On other sites (12437)
-
Qt Open Source Product 2 - Vlc Demo [closed]
3 June 2021, by cool code
Ⅰ. Preface


The previous demo was built directly on the FFmpeg libraries, and FFmpeg is too powerful for many beginners to digest. A lot of users only need to play a simple video stream and do not want to get involved in the complexities of decoding and transcoding, so VLC comes in handy: it wraps FFmpeg in a deep encapsulation and exposes a friendly interface. MPV does much the same thing, and in one respect MPV is even better than VLC: it ships as a single library file, apparently packaged as a static library, whereas VLC comes with a pile of dynamic library files and plug-in files.
Of course, the appeal of VLC's simplicity is that playback takes only a few lines of code to start. Letting beginners see a result immediately is very important and very exciting, so they can move on to the next step of coding more quickly and experience the fun of it.
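As an illustration of that "few lines of code" claim, here is a minimal sketch using the libvlc C API. This is not code from this article: the RTSP URL is a placeholder and error checking is omitted.

#include <vlc/vlc.h>
#include <cstdio>

int main()
{
    //Create the engine, load a media, and play it: that is the whole start-up
    libvlc_instance_t *vlc = libvlc_new(0, NULL);
    libvlc_media_t *media = libvlc_media_new_location(vlc, "rtsp://192.168.1.128:554/1");
    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

    libvlc_media_player_play(player);
    getchar(); //Block until the user presses Enter

    libvlc_media_player_stop(player);
    libvlc_media_player_release(player);
    libvlc_release(vlc);
    return 0;
}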


Ⅱ. Code framework


#include "ffmpeg.h"

FFmpegThread::FFmpegThread(QObject *parent) : QThread(parent)
{
 setObjectName("FFmpegThread");
 stopped = false;
 isPlay = false;

 frameFinish = false;
 videoWidth = 0;
 videoHeight = 0;
 oldWidth = 0;
 oldHeight = 0;
 videoStreamIndex = -1;
 audioStreamIndex = -1;

 url = "rtsp://192.168.1.128:554/1";

 buffer = NULL;
 avPacket = NULL;
 avFrame = NULL;
 avFrame2 = NULL;
 avFrame3 = NULL;
 avFormatContext = NULL;
 videoCodec = NULL;
 audioCodec = NULL;
 swsContext = NULL;

 options = NULL;
 videoDecoder = NULL;
 audioDecoder = NULL;

 //Initial registration; it only needs to happen once per process
 FFmpegThread::initlib();
}

//Only needs to be initialized once per process
void FFmpegThread::initlib()
{
 static QMutex mutex;
 QMutexLocker locker(&mutex);
 static bool isInit = false;
 if (!isInit) {
 //Register all available file formats and decoders in the library
 av_register_all();
 //Register all devices, mainly for local camera playback support
#ifdef ffmpegdevice
 avdevice_register_all();
#endif
 //Initialize the network stream format, which must be executed first when using the network stream
 avformat_network_init();

 isInit = true;
 qDebug() << TIMEMS << "init ffmpeg lib ok" << " version:" << FFMPEG_VERSION;
#if 0
 //Output all supported decoder names
 QStringList listCodeName;
 AVCodec *code = av_codec_next(NULL);
 while (code != NULL) {
 listCodeName << code->name;
 code = code->next;
 }

 qDebug() << TIMEMS << listCodeName;
#endif
 }
}

bool FFmpegThread::init()
{
 //Before opening the stream, set parameters such as probe duration, timeout, and maximum delay
 //Set the buffer size; for 1080p streams the value can be increased
 av_dict_set(&options, "buffer_size", "8192000", 0);
 //Open in TCP mode; to use UDP instead, replace "tcp" with "udp"
 av_dict_set(&options, "rtsp_transport", "tcp", 0);
 //Set the timeout before disconnecting, in microseconds; 3000000 means 3 seconds
 av_dict_set(&options, "stimeout", "3000000", 0);
 //Set the maximum delay, in microseconds; 1000000 means 1 second
 av_dict_set(&options, "max_delay", "1000000", 0);
 //Let FFmpeg choose the number of decoding threads automatically
 av_dict_set(&options, "threads", "auto", 0);

 //Open video stream
 avFormatContext = avformat_alloc_context();

 int result = avformat_open_input(&avFormatContext, url.toStdString().data(), NULL, &options);
 if (result < 0) {
 qDebug() << TIMEMS << "open input error" << url;
 return false;
 }

 //Free the options dictionary now that the stream is open
 if (options != NULL) {
 av_dict_free(&options);
 }

 //Get stream information
 result = avformat_find_stream_info(avFormatContext, NULL);
 if (result < 0) {
 qDebug() << TIMEMS << "find stream info error";
 return false;
 }

 //----------Start of the video stream section; marked to make code folding easier----------
 if (1) {
 videoStreamIndex = av_find_best_stream(avFormatContext, AVMEDIA_TYPE_VIDEO, -1, -1, &videoDecoder, 0);
 if (videoStreamIndex < 0) {
 qDebug() << TIMEMS << "find video stream index error";
 return false;
 }

 //Get video stream
 AVStream *videoStream = avFormatContext->streams[videoStreamIndex];

 //Get the video stream decoder, or specify the decoder
 videoCodec = videoStream->codec;
 videoDecoder = avcodec_find_decoder(videoCodec->codec_id);
 //videoDecoder = avcodec_find_decoder_by_name("h264_qsv");
 if (videoDecoder == NULL) {
 qDebug() << TIMEMS << "video decoder not found";
 return false;
 }

 //Set up accelerated decoding
 videoCodec->lowres = videoDecoder->max_lowres;
 videoCodec->flags2 |= AV_CODEC_FLAG2_FAST;

 //Open the video decoder
 result = avcodec_open2(videoCodec, videoDecoder, NULL);
 if (result < 0) {
 qDebug() << TIMEMS << "open video codec error";
 return false;
 }

 //Get the resolution size
 videoWidth = videoStream->codec->width;
 videoHeight = videoStream->codec->height;

 //If the width and height are not obtained, return
 if (videoWidth == 0 || videoHeight == 0) {
 qDebug() << TIMEMS << "find width height error";
 return false;
 }

 QString videoInfo = QString("Video stream info -> index: %1 decode: %2 format: %3 duration: %4 s Resolution: %5*%6")
 .arg(videoStreamIndex).arg(videoDecoder->name).arg(avFormatContext->iformat->name)
 .arg((avFormatContext->duration) / 1000000).arg(videoWidth).arg(videoHeight);
 qDebug() << TIMEMS << videoInfo;
 }
 //----------End of the video stream section----------

 //----------Start of the audio stream section; marked to make code folding easier----------
 if (1) {
 //Loop to find audio stream index
 audioStreamIndex = -1;
 for (uint i = 0; i < avFormatContext->nb_streams; i++) {
 if (avFormatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
 audioStreamIndex = i;
 break;
 }
 }

 //Some streams have no audio, so there is no need to return an error here
 if (audioStreamIndex == -1) {
 qDebug() << TIMEMS << "find audio stream index error";
 } else {
 //Get audio stream
 AVStream *audioStream = avFormatContext->streams[audioStreamIndex];
 audioCodec = audioStream->codec;

 //Get the audio stream decoder, or specify the decoder
 audioDecoder = avcodec_find_decoder(audioCodec->codec_id);
 //audioDecoder = avcodec_find_decoder_by_name("aac");
 if (audioDecoder == NULL) {
 qDebug() << TIMEMS << "audio codec not found";
 return false;
 }

 //Open the audio decoder
 result = avcodec_open2(audioCodec, audioDecoder, NULL);
 if (result < 0) {
 qDebug() << TIMEMS << "open audio codec error";
 return false;
 }

 QString audioInfo = QString("Audio stream information -> index: %1 decode: %2 Bit rate: %3 channel num: %4 sampling: %5")
 .arg(audioStreamIndex).arg(audioDecoder->name).arg(avFormatContext->bit_rate)
 .arg(audioCodec->channels).arg(audioCodec->sample_rate);
 qDebug() << TIMEMS << audioInfo;
 }
 }
 //----------End of audio stream----------

 //Pre-allocate memory
 avPacket = av_packet_alloc();
 avFrame = av_frame_alloc();
 avFrame2 = av_frame_alloc();
 avFrame3 = av_frame_alloc();

 //Compare with the previous file's width and height; when they change, the memory must be reallocated
 if (oldWidth != videoWidth || oldHeight != videoHeight) {
 //avpicture_get_size is deprecated; av_image_get_buffer_size with align=1 is equivalent
 int byte = av_image_get_buffer_size(AV_PIX_FMT_RGB32, videoWidth, videoHeight, 1);
 //Free any previously allocated buffer so a resolution change does not leak (av_free(NULL) is safe)
 av_free(buffer);
 buffer = (uint8_t *)av_malloc(byte * sizeof(uint8_t));
 oldWidth = videoWidth;
 oldHeight = videoHeight;
 }

 //Define pixel format
 AVPixelFormat srcFormat = AV_PIX_FMT_YUV420P;
 AVPixelFormat dstFormat = AV_PIX_FMT_RGB32;
 //Get the decoded format through the decoder
 srcFormat = videoCodec->pix_fmt;

 //The default SWS_FAST_BILINEAR flag is the fastest but may lose some picture data; change it to another flag if needed
 int flags = SWS_FAST_BILINEAR;

 //Open up a cache to store one frame of data
 //Either of the following two calls works; avpicture_fill has been deprecated
 //avpicture_fill((AVPicture *)avFrame3, buffer, dstFormat, videoWidth, videoHeight);
 av_image_fill_arrays(avFrame3->data, avFrame3->linesize, buffer, dstFormat, videoWidth, videoHeight, 1);

 //Image conversion
 swsContext = sws_getContext(videoWidth, videoHeight, srcFormat, videoWidth, videoHeight, dstFormat, flags, NULL, NULL, NULL);

 //Output video information
 //av_dump_format(avFormatContext, 0, url.toStdString().data(), 0);

 //qDebug() << TIMEMS << "init ffmpeg finsh";
 return true;
}

void FFmpegThread::run()
{
 while (!stopped) {
 //Perform initialization based on the flag bit
 if (isPlay) {
 this->init();
 isPlay = false;
 continue;
 }

 if (av_read_frame(avFormatContext, avPacket) >= 0) {
 //Determine whether the current package is video or audio
 int index = avPacket->stream_index;
 if (index == videoStreamIndex) {
 //Decode the video stream; the old avcodec_decode_video2 API has been deprecated
#if 0
 avcodec_decode_video2(videoCodec, avFrame2, &frameFinish, avPacket);
#else
 frameFinish = avcodec_send_packet(videoCodec, avPacket);
 if (frameFinish < 0) {
 //Unreference the packet before skipping it, otherwise the packet leaks
 av_packet_unref(avPacket);
 continue;
 }

 frameFinish = avcodec_receive_frame(videoCodec, avFrame2);
 if (frameFinish < 0) {
 av_packet_unref(avPacket);
 continue;
 }
#endif

 if (frameFinish >= 0) {
 //Turn the data into a picture
 sws_scale(swsContext, (const uint8_t *const *)avFrame2->data, avFrame2->linesize, 0, videoHeight, avFrame3->data, avFrame3->linesize);

 //The following two methods can be used
 //QImage image(avFrame3->data[0], videoWidth, videoHeight, QImage::Format_RGB32);
 QImage image((uchar *)buffer, videoWidth, videoHeight, QImage::Format_RGB32);
 if (!image.isNull()) {
 emit receiveImage(image);
 }

 msleep(1);
 }
 } else if (index == audioStreamIndex) {
 //Decode the audio stream; it is not handled here and is left to SDL to play
 }
 }

 av_packet_unref(avPacket);
 //Do not av_freep() the packet here: the same AVPacket is reused by the next av_read_frame and is freed once in free() when the thread stops
 msleep(1);
 }

 //Release resources after the thread ends
 free();
 stopped = false;
 isPlay = false;
 qDebug() << TIMEMS << "stop ffmpeg thread";
}

void FFmpegThread::setUrl(const QString &url)
{
 this->url = url;
}

void FFmpegThread::free()
{
 if (swsContext != NULL) {
 sws_freeContext(swsContext);
 swsContext = NULL;
 }

 if (avPacket != NULL) {
 //av_packet_free unreferences the packet and frees the struct allocated by av_packet_alloc
 av_packet_free(&avPacket);
 }

 if (avFrame != NULL) {
 av_frame_free(&avFrame);
 avFrame = NULL;
 }

 if (avFrame2 != NULL) {
 av_frame_free(&avFrame2);
 avFrame2 = NULL;
 }

 if (avFrame3 != NULL) {
 av_frame_free(&avFrame3);
 avFrame3 = NULL;
 }

 if (videoCodec != NULL) {
 avcodec_close(videoCodec);
 videoCodec = NULL;
 }

 if (audioCodec != NULL) {
 avcodec_close(audioCodec);
 audioCodec = NULL;
 }

 if (avFormatContext != NULL) {
 avformat_close_input(&avFormatContext);
 avFormatContext = NULL;
 }

 av_dict_free(&options);
 //qDebug() << TIMEMS << "close ffmpeg ok";
}

void FFmpegThread::play()
{
 //Let the thread perform initialization through the flag bit
 isPlay = true;
}

void FFmpegThread::pause()
{

}

void FFmpegThread::next()
{

}

void FFmpegThread::stop()
{
 //Stop the thread through the flag
 stopped = true;
}

//Real-time video display form class
FFmpegWidget::FFmpegWidget(QWidget *parent) : QWidget(parent)
{
 thread = new FFmpegThread(this);
 connect(thread, SIGNAL(receiveImage(QImage)), this, SLOT(updateImage(QImage)));
 image = QImage();
}

FFmpegWidget::~FFmpegWidget()
{
 close();
}

void FFmpegWidget::paintEvent(QPaintEvent *)
{
 if (image.isNull()) {
 return;
 }

 //qDebug() << TIMEMS << "paintEvent" << objectName();
 QPainter painter(this);
 painter.drawImage(this->rect(), image);
}

void FFmpegWidget::updateImage(const QImage &image)
{
 //this->image = image.copy();
 this->image = image;
 this->update();
}

void FFmpegWidget::setUrl(const QString &url)
{
 thread->setUrl(url);
}

void FFmpegWidget::open()
{
 //qDebug() << TIMEMS << "open video" << objectName();
 clear();

 thread->play();
 thread->start();
}

void FFmpegWidget::pause()
{
 thread->pause();
}

void FFmpegWidget::next()
{
 thread->next();
}

void FFmpegWidget::close()
{
 //qDebug() << TIMEMS << "close video" << objectName();
 if (thread->isRunning()) {
 thread->stop();
 thread->quit();
 thread->wait(500);
 }

 QTimer::singleShot(1, this, SLOT(clear()));
}

void FFmpegWidget::clear()
{
 image = QImage();
 update();
}
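To tie the two classes together, here is a minimal usage sketch. This is an assumption on my part rather than code from the article; it presumes ffmpeg.h declares FFmpegWidget as shown above.

#include <QApplication>
#include "ffmpeg.h"

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    FFmpegWidget w(NULL);
    w.setUrl("rtsp://192.168.1.128:554/1"); //Same default URL as the thread
    w.resize(640, 480);
    w.show();
    w.open(); //Starts FFmpegThread, which emits receiveImage for every decoded frame

    return app.exec();
}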




Ⅲ. Renderings




Ⅳ. Open source code download URL


1. Download URL for Dropbox:


https://www.dropbox.com/sh/n58ucs57pscp25e/AABWBQlg4U3Oz2WF9YOJDrj1a?dl=0


2. Download URL for Box:


https://app.box.com/s/x48a7ttpk667afqqdk7t1fqok4fmvmyv


-
Developing MobyCAIRO
26 May 2021, by Multimedia Mike — General
I recently published a tool called MobyCAIRO. The ‘CAIRO’ part stands for Computer-Assisted Image ROtation, while the ‘Moby’ prefix refers to its role in helping process artifact image scans to submit to the MobyGames database. The tool is meant to provide an accelerated workflow for rotating and cropping image scans. It works on both Windows and Linux. Hopefully, it can solve similar workflow problems for other people.
As of this writing, MobyCAIRO has not been tested on Mac OS X yet– I expect some issues there that should be easily solvable if someone cares to test it.
The rest of this post describes my motivations and how I arrived at the solution.
Background
I have scanned well in excess of 2100 images for MobyGames and other purposes in the past 16 years or so. The workflow looks like this:
[Image: the scanning workflow]
It should be noted that my original workflow featured me manually rotating the artifact on the scanner bed in order to ensure straightness, because I guess I thought that rotate functions in image editing programs constituted dark, unholy magic or something. So my workflow used to be even more arduous:
[Image: the old workflow; caption: I can’t believe I had the patience to do this for hundreds of scans]
Sometime last year, I was sitting down to perform some more scanning and found myself dreading the oncoming tedium of straightening and cropping the images. This prompted a pivotal question:
Why can’t a computer do this for me?
After all, I have always been a huge proponent of making computers handle the most tedious, repetitive, mind-numbing, and error-prone tasks. So I did some web searching to find if there were any solutions that dealt with this. I also consulted with some like-minded folks who have to cope with the same tedious workflow.
I came up empty-handed. So I endeavored to develop my own solution.
Problem Statement and Prior Work
I want to develop a workflow that can automatically rotate an image so that it is straight, and also find the most likely crop rectangle, uniformly whitening the area outside of the crop area (in the case of circles). As mentioned, I checked to see if any other programs can handle this, starting with my usual workhorse, Photoshop Elements. But I can’t expect the trimmed-down version to do everything. I tried to find out if its big brother could handle the task, but couldn’t find a definitive answer on that. Nor could I find any other tools that seem to take an interest in optimizing this particular workflow.
When I brought this up to some peers, I received some suggestions, including an idea that the venerable GIMP had a feature like this, but I could not find any evidence. Further, I would get responses of “Program XYZ can do image rotation and cropping.” I had to tamp down on the snark to avoid saying “Wow! An image editor that can perform rotation AND cropping? What a game-changer!” Rotation and cropping features have been table stakes for any halfway competent image editor for at least the last 25 years. I am hoping to find or create a program which can lend a bit of programmatic assistance to the task.
Why can’t other programs handle this? The answer seems fairly obvious: image editing tools are general tools, and I want a highly customized workflow. It’s not reasonable to expect a turnkey solution to do this.
Brainstorming An Approach
I started with the happiest of happy cases: a disc that needed archiving (a marketing/press assets CD-ROM from a video game company, contents described here) which appeared to have some pretty clear straight lines:
My idea was to try to find straight lines in the image and then rotate the image so that it is parallel to the horizontal, based on the longest single straight line detected.
I just needed to figure out how to find a straight line inside an image. Fortunately, I quickly learned that this is very much a solved problem, thanks to something called the Hough transform. As a bonus, I read that this is also the tool I would want to use for finding circles, when I got to that part. The nice thing about knowing the formal algorithm to use is being able to find efficient, optimized libraries which already implement it.
Early Prototype
A little searching for how to perform a Hough transform in Python led me first to scikit. I was able to rapidly produce a prototype that did some basic image processing. However, running the Hough transform directly on the image and rotating according to the longest line segment discovered turned out not to yield the expected results.
It also took a very long time to chew on the 3300×3300 raw image, certainly longer than I care to wait for an accelerated workflow concept. The key, however, is that you are apparently not supposed to run the Hough transform on a raw image: you need to compute the edges first, and then attempt to determine which edges are ‘straight’. The recommended algorithm for this step is the Canny edge detector. After applying this, I get the expected rotation:
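To make the recipe concrete, here is a minimal sketch of the Canny-then-Hough deskew pipeline described above, written with the OpenCV C++ API rather than my Python/scikit prototype; the file name and threshold values are illustrative assumptions, not values from MobyCAIRO.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

int main()
{
    cv::Mat scan = cv::imread("scan.png", cv::IMREAD_GRAYSCALE);

    //1. Compute edges first; running Hough on the raw image is slow and noisy
    cv::Mat edges;
    cv::Canny(scan, edges, 50, 150);

    //2. The probabilistic Hough transform returns line segments as endpoint pairs
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 100, 10);

    //3. Pick the longest segment and measure its angle to the horizontal
    double bestLen = 0.0, bestAngle = 0.0;
    for (const cv::Vec4i &l : lines) {
        double dx = l[2] - l[0], dy = l[3] - l[1];
        double len = std::hypot(dx, dy);
        if (len > bestLen) {
            bestLen = len;
            bestAngle = std::atan2(dy, dx) * 180.0 / CV_PI;
        }
    }

    //4. Rotate the image so that segment becomes horizontal
    cv::Point2f center(scan.cols / 2.0f, scan.rows / 2.0f);
    cv::Mat rot = cv::getRotationMatrix2D(center, bestAngle, 1.0);
    cv::Mat straightened;
    cv::warpAffine(scan, straightened, rot, scan.size());
    cv::imwrite("straightened.png", straightened);
    return 0;
}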
The algorithm also completes in a few seconds, so this is a good early result, and I was feeling pretty confident. But, again: happiest of happy cases. I should also mention at this point that I had originally envisioned a tool that I would simply run against a scanned image and it would automatically/magically make the image straight, followed by a perfect crop.
Along came my MobyGames comrade Foxhack to disabuse me of the hope of ever developing a fully automated tool. Just try to find a usefully long straight line in this:
Darn it, Foxhack…
There are straight edges, to be sure. But my initial brainstorm of rotating according to the longest straight edge looks infeasible. Further, it’s at this point that we started brainstorming that perhaps we could match on ratings badges, such as the standard ESRB badges omnipresent on U.S. video games. This gets into feature detection and complicates things.
This Needs To Be Interactive
At this point in the effort, I came to terms with the fact that the solution will need to have some element of interactivity. I will also need to get out of my safe Linux haven and figure out how to develop this on a Windows desktop, something I am not experienced with.
I initially dreamed up an impressive beast of a program written in C++ that leverages Windows desktop GUI frameworks, OpenGL for display and real-time rotation, GPU acceleration for image analysis and processing tricks, and some novel input concepts. I thought GPU acceleration would be crucial since I have a fairly good GPU on my main Windows desktop and I hear that these things are pretty good at image processing.
I created a list of prototyping tasks on a Trello board and made a decent amount of headway on prototyping all the various pieces that I would need to tie together in order to make this a reality. But it was ultimately slow going when you can only grab an hour or two here and there to try to get anything done.
Settling On A Solution
Recently, I was determined to get a set of old shareware discs archived. I ripped the data a year ago, but I was blocked on the scanning task because I knew it would also involve tedious straightening and cropping. So I finally got all the scans done, which was reasonably quick. But I was determined not to post-process them manually.
This was fairly recent, but I can’t quite recall how I managed to come across the OpenCV library and its Python bindings. OpenCV is an amazing library that provides a significant toolbox for performing image processing tasks. Not only that, it provides “just enough” UI primitives to be able to quickly create a basic GUI for your program, including image display via multiple windows, buttons, and keyboard/mouse input. Furthermore, OpenCV seems to be plenty fast enough to do everything I need in real time, just with (accelerated where appropriate) CPU processing.
So I went to work porting the ideas from the simple standalone Python/scikit tool. I thought of a refinement to the straight line detector: instead of just finding the longest straight edge, it creates a histogram of 360 rotation angles and builds a list of lines corresponding to each angle. Then it sorts the angles by cumulative line length and allows the user to iterate through this list, which will hopefully present the most likely straightened angle up front. Further, the tool allows making fine adjustments by 1/10 of a degree via the keyboard, not the mouse. It does all this while highlighting in red the straight line segments that are parallel to the horizontal axis, per the current candidate angle.
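The following is an assumed reconstruction of that angle-histogram refinement, not MobyCAIRO’s actual source: accumulate total segment length per whole-degree angle, then rank the angles by cumulative length so the most promising candidate comes first.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

std::vector<int> rankCandidateAngles(const std::vector<cv::Vec4i> &segments)
{
    //Total detected line length per whole-degree angle bucket
    std::vector<double> lengthPerAngle(360, 0.0);
    for (const cv::Vec4i &s : segments) {
        double dx = s[2] - s[0], dy = s[3] - s[1];
        int angle = ((int)std::lround(std::atan2(dy, dx) * 180.0 / CV_PI) + 360) % 360;
        lengthPerAngle[angle] += std::hypot(dx, dy);
    }

    //Rank angles by cumulative segment length, longest first
    std::vector<int> order(360);
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(), [&](int a, int b) {
        return lengthPerAngle[a] > lengthPerAngle[b];
    });
    return order; //order[0] is the most promising straightening angle
}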
The tool draws a light-colored grid over the frame to aid the user in visually verifying the straightness of the image. Further, the program has a mode that allows the user to see the algorithm’s detected edges:
For the cropping phase, the program uses the Hough circle transform in a similar manner, finding the most likely circles (if the image to be processed is supposed to be a circle) and allowing the user to cycle among them while making precise adjustments via the keyboard, again, rather than the mouse.
Running the Hough circle transform is a significantly more intensive operation than the line transform. When I ran it on a full 3300×3300 image, it ran for a long time; I didn’t let it run longer than a minute before forcibly ending the program. Is this approach unworkable? Not quite: it turns out that the transform is just as effective when shrinking the image to 400×400, and it completes in under 2 seconds on my Core i5 CPU.
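Here is a sketch of that shrink-then-detect trick, again in OpenCV C++ rather than the tool’s Python; all parameter values are assumptions for illustration.

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec3f> findCircles(const cv::Mat &fullGray)
{
    //Shrink to 400x400 before the expensive circle transform
    cv::Mat small;
    cv::resize(fullGray, small, cv::Size(400, 400));

    std::vector<cv::Vec3f> circles; //each candidate is (x, y, radius)
    cv::HoughCircles(small, circles, cv::HOUGH_GRADIENT,
                     1,         //accumulator resolution ratio
                     100,       //minimum distance between circle centers
                     100, 50,   //Canny high threshold, accumulator threshold
                     100, 200); //min/max radius in the 400x400 image

    //Map the candidates back to full-resolution coordinates
    double sx = (double)fullGray.cols / 400.0;
    double sy = (double)fullGray.rows / 400.0;
    for (cv::Vec3f &c : circles) {
        c[0] *= sx;
        c[1] *= sy;
        c[2] *= (sx + sy) / 2.0; //radius scaled by the average factor
    }
    return circles;
}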
For rectangular cropping, I just settled on using OpenCV’s built-in region-of-interest (ROI) facility. I tried to intelligently find the best candidate rectangle and allow fine adjustments via the keyboard, but I wasn’t having much success, so I took a path of lesser resistance.
Packaging and Residual Weirdness
I realized that this tool would be more useful to a broader Windows-using base of digital preservationists if they didn’t have to install Python, establish a virtual environment, and install the prerequisite dependencies. Thus, I made the effort to figure out how to wrap the entire thing up into a monolithic Windows EXE binary. It is available from the project’s Github release page (another thing I figured out for the sake of this project!).
The binary is pretty heavy, weighing in at a bit over 50 megabytes. You might advise using compression. It IS compressed! Before I figured out the --onefile command for pyinstaller.exe, the generated dist/ subdirectory was 150 MB. Among other things, there’s a 30 MB FORTRAN BLAS library packaged in!
Conclusion and Future Directions
Once I got it all working, with a simple tkinter UI up front to select between circle and rectangle crop modes, I unleashed the tool on 60 or so scans in bulk, using the Windows forfiles command (another learning experience). I didn’t put a clock on the effort, but it felt faster. Of course, I was bursting with pride the whole time because I was using my own tool. I just wish I had thought of it sooner. But, really, with 2100+ scans under my belt, I’m just getting started: I literally have thousands more artifacts to scan for preservation.
The tool isn’t perfect, of course. Just tonight, I threw another scan at MobyCAIRO. Just go ahead and try to find straight lines in this specimen:
I eventually had to use the text left and right of center to line up against the grid with the manual keyboard adjustments. Still, I’m impressed by how these computer vision algorithms can see patterns I can’t, highlighting lines I never would have guessed at.
I’m eager to play with OpenCV some more, particularly the video processing functions, perhaps even some GPU-accelerated versions.
The post Developing MobyCAIRO first appeared on Breaking Eggs And Making Omelettes.
-
ffmpeg saves H.265 video from RTSP but I can't open it
14 June 2021, by Сергей Брандуков
By mistake, the IP camera's settings were set to the H.265 codec (it should have been H.264).


# ffmpeg -i rtsp://admin:admin@192.168.100.22:554/main -y -c:v copy /home/ubuntu/Video/8/tst_256.mp4
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
 built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
 configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
[hevc @ 0x55e8744be3c0] Invalid NAL unit 0, skipping.
[hevc @ 0x55e8744be3c0] VPS 0 does not exist
[hevc @ 0x55e8744be3c0] Invalid NAL unit 0, skipping.
[hevc @ 0x55e8744be3c0] VPS 0 does not exist
[rtsp @ 0x55e8744ba6c0] max delay reached. need to consume packet
[rtsp @ 0x55e8744ba6c0] RTP: missed 84 packets
Input #0, rtsp, from 'rtsp://admin:admin@192.168.100.22:554/main':
 Metadata:
 title : RTSP/RTP stream from IPNC
 comment : main
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0: Video: hevc (Main), yuvj420p(pc, bt709), 2048x1536, 50 tbr, 90k tbn, 90k tbc
 Stream #0:1: Data: none
 Stream #0:2: Data: none
Output #0, mp4, to '/home/ubuntu/Video/8/tst_256.mp4':
 Metadata:
 title : RTSP/RTP stream from IPNC
 comment : main
 encoder : Lavf58.29.100
 Stream #0:0: Video: hevc (Main) (hev1 / 0x31766568), yuvj420p(pc, bt709), 2048x1536, q=2-31, 50 tbr, 90k tbn, 90k tbc
Stream mapping:
 Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0x55e874532440] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mp4 @ 0x55e874532440] Non-monotonous DTS in output stream 0:0; previous: 0, current: 0; changing to 1. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55e874532440] Non-monotonous DTS in output stream 0:0; previous: 1, current: 0; changing to 2. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55e874532440] Non-monotonous DTS in output stream 0:0; previous: 216000, current: 110136; changing to 216001. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55e874532440] Non-monotonous DTS in output stream 0:0; previous: 216001, current: 124536; changing to 216002. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55e874532440] Non-monotonous DTS in output stream 0:0; previous: 216002, current: 146226; changing to 216003. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55e874532440] Non-monotonous DTS in output stream 0:0; previous: 216003, current: 160536; changing to 216004. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55e874532440] Non-monotonous DTS in output stream 0:0; previous: 216004, current: 175026; changing to 216005. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55e874532440] Non-monotonous DTS in output stream 0:0; previous: 216005, current: 200136; changing to 216006. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55e874532440] Non-monotonous DTS in output stream 0:0; previous: 216006, current: 214536; changing to 216007. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55e874532440] Non-monotonous DTS in output stream 0:0; previous: 855816, current: 851365; changing to 855817. This may result in incorrect timestamps in the output file.
frame= 100 fps=6.6 q=-1.0 Lsize= 10247kB time=00:00:18.03 bitrate=4653.6kbits/s speed=1.19x 
video:10245kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.021781%
Exiting normally, received signal 2.



The file exists and is not empty, but the video player shows a black screen, and I also cannot split the video into frames:


# ffmpeg -i tst_256.mp4 -q:v 1 -r 1 1/%05d.jpg
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
 built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
 configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
[AVBSFContext @ 0x555c46ce3540] No start code is found.
tst_256.mp4: could not find codec parameters
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'tst_256.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2mp41
 title : RTSP/RTP stream from IPNC
 encoder : Lavf58.29.100
 comment : main
 Duration: 00:00:18.04, bitrate: N/A
 Stream #0:0(und): Video: hevc (hev1 / 0x31766568), none, 2048x1536, 4652 kb/s, 5.54 fps, 90k tbn (default)
 Metadata:
 handler_name : VideoHandler
Stream mapping:
 Stream #0:0 -> #0:0 (hevc (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[hevc @ 0x555c46cf6dc0] No start code is found.
[hevc @ 0x555c46cf6dc0] Error splitting the input into NAL units.
[hevc @ 0x555c46d07300] No start code is found.
[hevc @ 0x555c46d07300] Error splitting the input into NAL units.
[hevc @ 0x555c46d17d00] No start code is found.
[hevc @ 0x555c46d17d00] Error splitting the input into NAL units.
[hevc @ 0x555c46d286c0] No start code is found.
[hevc @ 0x555c46d286c0] Error splitting the input into NAL units.
[hevc @ 0x555c46d39000] No start code is found.
[hevc @ 0x555c46d39000] Error splitting the input into NAL units.
[hevc @ 0x555c46d499c0] No start code is found.
[hevc @ 0x555c46d499c0] Error splitting the input into NAL units.
[hevc @ 0x555c46d5a400] No start code is found.
[hevc @ 0x555c46d5a400] Error splitting the input into NAL units.
[hevc @ 0x555c46d6ae40] No start code is found.
...
hevc @ 0x555c46cf6dc0] No start code is found.
[hevc @ 0x555c46cf6dc0] Error splitting the input into NAL units.
Error while decoding stream #0:0: Invalid data found when processing input
 Last message repeated 7 times
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished



Is it possible to somehow extract frames from this video, or was the video stream not saved?
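A hedged possibility, not part of the original question: the capture log's "VPS 0 does not exist" warnings and the 84 missed RTP packets suggest the HEVC parameter sets (VPS/SPS/PPS) never made it into the file, which is why the decoder later reports "No start code is found". If any in-band parameter sets were stored, extracting the raw Annex-B bitstream may let a decoder resynchronize, though it may also recover nothing:

# Convert the MP4 samples to an Annex-B elementary stream, then retry frame extraction
ffmpeg -i tst_256.mp4 -c:v copy -bsf:v hevc_mp4toannexb tst_256.hevc
ffmpeg -i tst_256.hevc -q:v 1 -r 1 1/%05d.jpg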