
Media (2)
-
Core Media Video
4 April 2013, by
Updated: June 2013
Language: French
Type: Video
-
Bee video in portrait format
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (69)
-
Improvements to the base version
13 September 2013
Nice multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
To do so, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by activating Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
-
Emballe médias: what is it for?
4 February 2011, by
This plugin is designed to manage sites for publishing documents of all types.
It creates "media", that is: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only one document can be linked to a so-called "media" article;
-
Farm management
2 March 2010, by
The farm as a whole is managed by "super admins".
Certain settings can be adjusted to accommodate the needs of the different channels.
To begin with, it uses the "Gestion de mutualisation" plugin
On other sites (11686)
-
An introduction to reverse engineering
22 January 2011
(This blog is still in hibernation, but I needed somewhere to post this)
Reverse engineering is one of those wonderful topics, covering everything from simple "guess how this program works" problem solving, to poking at silicon with scanning electron microscopes. I’m always hugely fascinated by articles that walk through the steps involved in these types of activities, so I thought I’d contribute one back to the world.
In this case, I’m going to be looking at the export bundle format created by the Tandberg Content Server, a device for recording video conferences. The bundle is intended for moving recordings between Tandberg devices, but it’s also the easiest way to get all of the related assets for a recorded conference. Unfortunately, there’s no parser available to take the bundle files (.tcb) and output the component pieces. Well, that just won’t do.
For this type of reverse engineering, I basically want to learn enough about the TCB format to be able to parse out the individual files within it. The only tools I’ll need in this process are a hex editor, a notepad, and a way to convert between hex and decimal (the OS X calculator will do fine if you’re not the type to do it in your head).
Step 1: Basic Research
After Googling around to see if this was a solved issue, I decided to dive into the format. I brought a sample bundle into my trusty hex editor (in this case Hex Fiend).
A few things are immediately obvious. First, we see that the first four bytes are the letters TCSB. Another quick visit to Google confirms this header type isn't found elsewhere, and there's essentially no discussion of it. Going a few bytes further, we see "contents.xml". And a few bytes after that, we see what looks like plaintext XML. This is a pretty good clue that the TCB file consists of a series of files, each preceded by its filename, bundled together with some metadata. Let's scan a bit further and see if we can confirm that.
In this segment, we see the end of the XML, and something that could be another filename - "dbtransfer" - followed by what looks like gibberish. That doesn’t help too much. Let’s keep looking.
Great - a .jpg! Looking a bit further, we see the letters "JFIF," which is recognizable as part of a JPEG header. If you weren't already familiar with that, a quick Google for "jpg hex header" would clear up any confusion. So, we've got the basics of the file format down, but we'll need a little bit more information if we're going to write a parser.
Step 2: Finding the pattern
We can make an educated guess that a file like this has to provide a few hints to a decoder. We would either expect a table of contents, describing where in the bundle each individual file was located, some sort of stop bit marking the boundary between files, byte offsets describing the locations of files, or a listing of file lengths.
There isn't any sign of a table of contents. Let's start looking for a stop bit, as that would make writing our parser really easy. What I'm going to do is pull out all of the data between two prospective files, and I want two sets to compare.
I've placed asterisks to flag the bytes corresponding to the filenames, since those are known.
1E D1 70 4C 25 06 36 4D 42 E9 65 6A 9F 5D 88 38 0A 00 *64 62 74 72 61 6E 73 66 65 72* 42 06 ED 48 0B 50 0A C4 14 D6 63 42 F2 BF E3 9D 20 29 00 00 00 00 00 00 DE E5 FD
01 0C 00 *63 6F 6E 74 65 6E 74 73 2E 78 6D 6C* 9E 0E FE D3 C9 3A 3A 85 F4 E4 22 FE D0 21 DC D7 53 03 00 00 00 00 00 00
The first line corresponds to the "dbtransfer" entry, the second to the "contents.xml" entry. Let’s trim the first entry to match the second.
38 0A 00 *64 62 74 72 61 6E 73 66 65 72* 42 06 ED 48 0B 50 0A C4 14 D6 63 42 F2 BF E3 9D 20 29 00 00 00 00 00 00
01 0C 00 *63 6F 6E 74 65 6E 74 73 2E 78 6D 6C* 9E 0E FE D3 C9 3A 3A 85 F4 E4 22 FE D0 21 DC D7 53 03 00 00 00 00 00 00
It looks like we've got three bytes before the filename, 18 bytes after it, and then six bytes of zero. Unfortunately, there's no obvious pattern of bits which would correspond to a "break" between segments. However, looking at those first three bytes, we see a 0x0A and a 0x0C, two small values in the same place: 10 and 12. Interesting - the 12 entry corresponds with "contents.xml" and the 10 entry corresponds with "dbtransfer". Could that byte describe the length of the filename? Let's look at our much longer JPG entry to be sure.
70 4A 00 *77 77 77 5C 73 6C 69 64 65 73 5C 64 37 30 64 35 34 63 66 2D 32 39 35 62 2D 34 31 34 63 2D 61 38 64 66 2D 32 66 37 32 64 66 33 30 31 31 35 65 5C 74 68 75 6D 62 6E 61 69 6C 73 5C 74 68 75 6D 62 6E 61 69 6C 30 30 2E 6A 70 67*
0x4A is 74, corresponding to a 74-character filename. Looks like we're in business.
At this point, it’s worth an aside to talk about endianness. I happen to know that the Tandberg Content Server runs Windows on Intel, so I went into this with the assumption that the format was little-endian. However, if you’re not sure, it’s always worth looking at words backwards and forwards, just in case.
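To make that guess concrete, here's a quick Python sketch (not part of the original write-up) that reads the two bytes immediately before the name as a little-endian length; the three leading bytes are simply copied from the "contents.xml" dump above:

import struct

# The "contents.xml" entry from the hex dump above: 3 header bytes, then the name.
entry = bytes.fromhex("010C00") + b"contents.xml"

name_start = 3
# Interpret the two bytes just before the name as a little-endian 16-bit length.
name_len = struct.unpack("<H", entry[name_start - 2:name_start])[0]
print(name_len)                                 # 12 - matches len("contents.xml")
print(entry[name_start:name_start + name_len])  # b'contents.xml'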
So we know how to find our filename. Now how do we find our file data? Let's go back to our JPEG. We know that JPEGs start with 0xFFD8FFE0, and a quick trip to Google also tells us that they end with 0xFFD9. We can use that to pull a sample jpeg out of our TCB, save it to disk, and confirm that we're on the right track.
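Concretely, the sanity check can be as simple as the following sketch (the bundle filename is a placeholder; a real extractor would use the length fields rather than scanning for markers):

# Carve the first JPEG out of the bundle by scanning for its start and end markers.
# "sample.tcb" is a placeholder filename.
data = open("sample.tcb", "rb").read()

start = data.find(bytes.fromhex("FFD8FFE0"))   # JPEG SOI followed by the JFIF APP0 marker
end = data.find(bytes.fromhex("FFD9"), start)  # JPEG end-of-image marker
if start != -1 and end != -1:
    with open("extracted.jpg", "wb") as out:
        out.write(data[start:end + 2])         # include the two-byte FFD9 marker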
This is one of those great steps in reverse engineering - concrete proof that you’re on the right track. Everything seems to go quicker from this point on.
So, we know we've got a JPEG file in a continuous 2177-byte segment. We know that the format uses byte lengths to describe filenames - maybe it also uses byte lengths to describe file lengths. Let's look for 2177 near our JPEG; that's 0x0881, which a little-endian file will store as the byte sequence 81 08.
Well, that's a good sign. But it could be coincidental, so at this point we'd want to check a few other files to be sure. In fact, looking further into the file, we find some larger .mp4 entries which don't quite match our guess. It turns out that the file length is a 32-bit value, not a 16-bit one - with our two JPEGs, the higher-order bytes just happened to be zeros.
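In other words (a quick illustration using the length bytes of our 2177-byte JPEG):

import struct

# 2177 stored as a little-endian 32-bit value is 81 08 00 00; for a file this
# small the high-order bytes are zero, which is why it first looked like a 16-bit field.
length_bytes = bytes.fromhex("81080000")
print(struct.unpack("<H", length_bytes[:2])[0])  # 2177, reading only 16 bits
print(struct.unpack("<I", length_bytes)[0])      # 2177, the full 32-bit value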
Step 3: Writing a parser
"Bbbbbut...", I hear you say ! "You have all these chunks of data you don’t understand !"
True enough, but all I care about is getting the files out, with the proper names. I don’t care about creation dates, file permissions, or any of the other crud that this file format likely contains.
Let’s look at the first two files in this bundle. A little bit of byte counting shows us the pattern that we can follow. We’ll treat the first file as a special case. After that, we seek 16 bytes from the end of file data to find the filename length (2 bytes), then we’re at the filename, then we seek 16 bytes to find the file length (4 bytes) and seek another 4 bytes to find the start of the file data. Rinse, repeat.
I wrote a quick parser in PHP, since the eventual use for this information is part of a larger PHP-based application, but any language with basic raw file handling would work just as well.
tcsParser.txt
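If you'd rather follow along in Python than PHP, the same walk looks roughly like this - a sketch reconstructed from the byte counting above, not a transcription of tcsParser.txt, and the way the first entry is located is hand-waved behind an offset argument:

import struct

def parse_tcb(data, first_name_len_offset):
    """Yield (filename, file_bytes) pairs from a TCB bundle.

    first_name_len_offset: offset of the first entry's 2-byte filename length
    (the first file is a special case, so its position is taken as given).
    """
    pos = first_name_len_offset
    while pos + 2 <= len(data):
        name_len = struct.unpack("<H", data[pos:pos + 2])[0]
        pos += 2
        name = data[pos:pos + name_len].decode("ascii", errors="replace")
        pos += name_len
        pos += 16                                         # unknown metadata, skipped
        file_len = struct.unpack("<I", data[pos:pos + 4])[0]
        pos += 4
        pos += 4                                          # more unknown bytes, skipped
        yield name, data[pos:pos + file_len]
        pos += file_len
        pos += 16                                         # gap before the next filename length

# Usage sketch:
# for name, blob in parse_tcb(open("sample.tcb", "rb").read(), first_offset):
#     ...write blob out under name...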
This was about the simplest possible type of reverse engineering - we had known data in an unknown format, without any compression or encryption. It only gets harder from here...
-
OpenCV and Network Cameras -or- How to spy on the neighbors?
16 May 2014, by Alexander
A bit of context: this program was originally built to work with USB cameras, but because of the setup between where the cameras need to be and where the computer is, it makes more sense to switch to cameras run over a network. Now I'm trying to convert the program to accomplish this, but my efforts thus far have met with poor results. I've also asked this same question over on the OpenCV forums. Help me spy on my neighbors! (This is with their permission, of course!) :D
I'm using:
- OpenCV v2.4.6.0
- C++
- D-Link Cloud Camera 7100 (Installer is DCS-7010L, according to the instructions.)
I am trying to access the D-Link camera's video feed through OpenCV.
I can access the camera through its IP address with a browser without any issues. Unfortunately, my program is less cooperative. When attempting to access the camera, the program gives the OpenCV-generated error:
warning : Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:529)
This error occurs with just about everything I try that doesn’t somehow generate more problems.
For reference - the code in OpenCV's cap_ffmpeg_impl.hpp around line 529 is as follows:
522 bool CvCapture_FFMPEG::open( const char* _filename )
523 {
524 unsigned i;
525 bool valid = false;
526
527 close();
528
529 #if LIBAVFORMAT_BUILD >= CALC_FFMPEG_VERSION(52, 111, 0)
530 int err = avformat_open_input(&ic, _filename, NULL, NULL);
531 #else
532 int err = av_open_input_file(&ic, _filename, NULL, 0, NULL);
533 #endif
...
616 }
...for which I have no idea what I'm looking at. It seems to be looking for the ffmpeg version - but I've already installed the latest ffmpeg on that computer, so that shouldn't be the issue.
This is the edited-down version I tried to use, as per Sebastian Schmitz's recommendation:
1 #include <fstream> // File input/output
2 #include <iostream> // cout / cin / etc
3 #include <windows.h> // Windows API stuff
4 #include // More input/output stuff
5 #include <string> // "Strings" of characters strung together to form words and stuff
6 #include <cstring> // "Strings" of characters strung together to form words and stuff
7 #include <streambuf> // For buffering load files
8 #include <array> // Functions for working with arrays
9 #include <opencv2/imgproc/imgproc.hpp> // Image Processor
10 #include <opencv2/core/core.hpp> // Basic OpenCV structures (cv::Mat, Scalar)
11 #include <opencv2/highgui/highgui.hpp> // OpenCV window I/O
12 #include "opencv2/calib3d/calib3d.hpp"
13 #include "opencv2/features2d/features2d.hpp"
14 #include "opencv2/opencv.hpp"
15 #include "resource.h" // Included for linking the .rc file
16 #include // For sleep()
17 #include <chrono> // To get start-time of program.
18 #include <algorithm> // For looking at whole sets.
19
20 #ifdef __BORLANDC__
21 #pragma argsused
22 #endif
23
24 using namespace std; // Standard operations. Needed for most basic functions.
25 using namespace std::chrono; // Chrono operations. Needed getting starting time of program.
26 using namespace cv; // OpenCV operations. Needed for most OpenCV functions.
27
28 string videoFeedAddress = "";
29 VideoCapture videoFeedIP = NULL;
30 Mat clickPointStorage; //Artifact from original program.
31
32 void displayCameraViewTest()
33 {
34 VideoCapture cv_cap_IP;
35 Mat color_img_IP;
36 int capture;
37 IplImage* color_img;
38 cv_cap_IP.open(videoFeedAddress);
39 Sleep(100);
40 if(!cv_cap_IP.isOpened())
41 {
42 cout << "Video Error: Video input will not work.\n";
43 cvDestroyWindow("Camera View");
44 return;
45 }
46 clickPointStorage.create(color_img_IP.rows, color_img_IP.cols, CV_8UC3);
47 clickPointStorage.setTo(Scalar(0, 0, 0));
48 cvNamedWindow("Camera View", 0); // create window
49 IplImage* IplClickPointStorage = new IplImage(clickPointStorage);
50 IplImage* Ipl_IP_Img;
51
52 for(;;)
53 {
54 cv_cap_IP.read(color_img_IP);
55 IplClickPointStorage = new IplImage(clickPointStorage);
56 Ipl_IP_Img = new IplImage(color_img_IP);
57 cvAdd(Ipl_IP_Img, IplClickPointStorage, color_img);
58 cvShowImage("Camera View", color_img); // show frame
59 capture = cvWaitKey(10); // wait 10 ms or for key stroke
60 if(capture == 27 || capture == 13 || capture == 32){break;} // if ESC, Return, or space; close window.
61 }
62 cv_cap_IP.release();
63 delete Ipl_IP_Img;
64 delete IplClickPointStorage;
65 cvDestroyWindow("Camera View");
66 return;
67 }
68
69 int main()
70 {
71 while(1)
72 {
73 cout << "Please Enter Video-Feed Address: ";
74 cin >> videoFeedAddress;
75 if(videoFeedAddress == "exit"){return 0;}
76 cout << "\nvideoFeedAddress: " << videoFeedAddress << endl;
77 displayCameraViewTest();
78 if(cvWaitKey(10) == 27){return 0;}
79 }
80 return 0;
81 }
</algorithm></chrono></array></streambuf></cstring></string></iostream></fstream>Using added ’cout’s I was able to narrow it down to line 38 : "cv_cap_IP.open(videoFeedAddress) ;"
No value I enter for the videoFeedAddress variable seems to get a different result. I found THIS site, which lists a number of possible addresses for connecting to it. Since there is no 7100 anywhere in the list, and considering that the installer is labeled "DCS-7010L", I used the addresses listed next to the DCS-7010L entries. Most of them can be reached through the browser, confirming that they reach the camera - but they don't seem to affect the outcome when I use them in the videoFeedAddress variable.
I’ve tried many of them both with and without username:password, the port number (554), and variations on ?.mjpg (the format) at the end.
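As a sanity check outside the C++ program, I can at least confirm whether OpenCV itself will open a given address, e.g. through the Python bindings; the address below is just a made-up placeholder in the username:password@host:port/path style described above, not the camera's real path:

import cv2

# Try to open a network camera URL with OpenCV directly.
# Placeholder address - substitute whatever path the camera actually exposes.
url = "http://admin:password@192.168.0.20:554/video.mjpg"

cap = cv2.VideoCapture(url)
if not cap.isOpened():
    print("OpenCV could not open:", url)
else:
    ok, frame = cap.read()
    print("Opened; got a frame:", ok)
    cap.release()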
I searched around and came across a number of different "possible" answers - but none of them seem to work for me. Some of them did give me the idea of including the username:password etc. mentioned above, but it doesn't seem to be making a difference. Of course, the number of possible combinations is rather large, so I certainly have not tried all of them (more direction here would be appreciated). Here are some of the links I found:
- This is one of the first configurations my code was in. No dice.
- This one is talking about files - not cameras. It also mentions codecs - but I wouldn't be able to watch it in a web browser if that were the problem, right? (Correct me if I'm wrong here...)
- This one has the wrong error code/points to the wrong line of code!
- This one mentions compiling OpenCV with ffmpeg support - but I believe 2.4.6.0 already comes with that all set and ready! Otherwise it's not that different from what I've already tried.
- Now THIS one appears to be very similar to what I have, but the only proposed solution doesn’t really help as I had already located a list of connections. I do not believe this is a duplicate, because as per THIS meta discussion I had a lot more information and so didn’t feel comfortable taking over someone else’s question - especially if I end up needing to add even more information later.
Thank you for reading this far. I realize that I am asking a somewhat specific question - although I would appreciate any advice you can think of regarding OpenCV & network cameras or even related topics.
TLDR: Network Camera and OpenCV are not cooperating. I'm unsure if it's the address I'm using to direct the program to the camera or the command I'm using - but no adjustment I make seems to improve the result beyond what I've already done! Now my neighbors will go unwatched!
-
My python script using ffmpeg captures video content, but the captured content freezes in the middle and jumps frames
11 November 2022, by Supriyo Mitra
I am new to ffmpeg and I am trying to use it through a Python script. The Python function that captures the video content is given below. The problem I am facing is that the captured content freezes at (uneven) intervals and skips a few frames every time it happens.


` def capturelivestream(self, argslist):
 streamurl, outnum, feedid, outfilename = argslist[0], argslist[1], argslist[2], argslist[3]
 try:
 info = ffmpeg.probe(streamurl, select_streams='a')
 streams = info.get('streams', [])
 except:
 streams = []
 if len(streams) == 0:
 print('There are no streams available')
 stream = {}
 else:
 stream = streams[0]
 for stream in streams:
 if stream.get('codec_type') != 'audio':
 continue
 else:
 break
 if 'channels' in stream.keys():
 channels = stream['channels']
 samplerate = float(stream['sample_rate'])
 else:
 channels = None
 samplerate = 44100
 process = ffmpeg.input(streamurl).output('pipe:', pix_fmt='yuv420p', format='avi', vcodec='libx264', acodec='pcm_s16le', ac=channels, ar=samplerate, vsync=0, loglevel='quiet').run_async(pipe_stdout=True)
 fpath = os.path.dirname(outfilename)
 fnamefext = os.path.basename(outfilename)
 fname = fnamefext.split(".")[0]
 read_size = 320 * 180 * 3 # This is width * height * 3
 lastcaptured = time.time()
 maxtries = 12
 ntries = 0
 while True:
 if process:
 inbytes = process.stdout.read(read_size)
 if inbytes is not None and inbytes.__len__() > 0:
 try:
 frame = (np.frombuffer(inbytes, np.uint8).reshape([180, 320, 3]))
 except:
 print("Failed to reshape frame: %s"%sys.exc_info()[1].__str__())
 continue # This could be an issue if there is a continuous supply of frames that cannot be reshaped
 self.processq.put([outnum, frame])
 lastcaptured = time.time()
 ntries = 0
 else:
 if self.DEBUG:
 print("Could not read frame for feed ID %s"%feedid)
 t = time.time()
 if t - lastcaptured > 30: # If the frames can't be read for more than 30 seconds...
 print("Reopening feed identified by feed ID %s"%feedid)
 process = ffmpeg.input(streamurl).output('pipe:', pix_fmt='yuv420p', format='avi', vcodec='libx264', acodec='pcm_s16le', ac=channels, ar=samplerate, vsync=0, loglevel='quiet').run_async(pipe_stdout=True)
 ntries += 1
 if ntries > maxtries:
 if self.DEBUG:
 print("Stream %s is no longer available."%streamurl)
 # DB statements removed here
 
 break # Break out of infinite loop.
 continue
 
 return None`




The function that writes out the captured frames is as follows:



` def framewriter(self, outlist):
 isempty = False
 endofrun = False
 while True:
 frame = None
 try:
 args = self.processq.get()
 except: # Sometimes, the program crashes at this point due to lack of memory...
 print("Error in framewriter while reading from queue: %s"%sys.exc_info()[1].__str__())
 continue
 outnum = args[0]
 frame = args[1]
 if outlist.__len__() > outnum:
 out = outlist[outnum]
 else:
 if self.DEBUG == 2:
 print("Could not get writer %s"%outnum)
 continue
 if frame is not None and out is not None:
 out.write(frame)
 isempty = False
 endofrun = False
 else:
 if self.processq.empty() and not isempty:
 isempty = True
 elif self.processq.empty() and isempty: # processq queue is empty now and was empty last time
 print("processq is empty")
 endofrun = True
 elif endofrun and isempty:
 print("Could not find any frames to process. Quitting")
 break
 print("Done writing feeds. Quitting.")
 return None`



The scenario is as follows: there are multiple video streams from a certain website at any time during the day, and the program containing these functions has to capture them as they get streamed. The memory available to this program is 6GB and there could be up to 3 streams running at any instant. Given below is the relevant main section of the script that uses the functions given above.






`itftennis = VideoBot(siteurl)
outlist = []
t = Thread(target=itftennis.framewriter, args=(outlist,))
t.daemon = True
t.start()
tp = Thread(target=handleprocesstermination, args=())
tp.daemon = True
tp.start()
# Create a database connection and as associated cursor object. We will handle database operations from main thread only.
# DB statements removed from here...
feedidlist = []
vidsdict = {}
streampattern = re.compile("\?vid=(\d+)$")
while True:
 streampageurls = itftennis.checkforlivestream()
 if itftennis.DEBUG:
 print("Checking for new urls...")
 print(streampageurls.__len__())
 if streampageurls.__len__() > 0:
 argslist = []
 newurlscount = 0
 for streampageurl in streampageurls:
 newstream = False
 sps = re.search(streampattern, streampageurl)
 if sps:
 streamnum = sps.groups()[0]
 if streamnum not in vidsdict.keys(): # Check if this stream has already been processed.
 vidsdict[streamnum] = 1
 newstream = True
 else:
 continue
 else:
 continue
 print("Detected new live stream... Getting it.")
 streamurl = itftennis.getstreamurlfrompage(streampageurl)
 print("Adding %s to list..."%streamurl)
 if streamurl is not None:
 # Now, get feed metadata...
 metadata = itftennis.getfeedmetadata(streampageurl)
 if metadata is None:
 continue
 # lines to get matchescounter omitted here...
 if matchescounter >= itftennis.__class__.MAX_CONCURRENT_MATCHES:
 break
 if newstream is True:
 newurlscount += 1
 outfilename = time.strftime("./videodump/" + "%Y%m%d%H%M%S",time.localtime())+".avi"
 out = open(outfilename, "wb")
 outlist.append(out) # Save it in the list and take down the number for usage in framewriter
 outnum = outlist.__len__() - 1
 # Save metadata in DB
 # lines omitted here....
 argslist.append([streamurl, outnum, feedid, outfilename]) 
 else:
 print("Couldn't get the stream url from page")
 if newurlscount > 0:
 for args in argslist:
 try:
 p = Process(target=itftennis.capturelivestream, args=(args,))
 p.start()
 processeslist.append(p)
 if itftennis.DEBUG:
 print("Started process with args %s"%args)
 except:
 print("Could not start process due to error: %s"%sys.exc_info()[1].__str__())
 print("Created processes, continuing now...")
 continue
 time.sleep(itftennis.livestreamcheckinterval)
t.join()
tp.join()
for out in outlist:
 out.close()`







Please accept my apologies for swamping you with this amount of code. I wanted to provide maximum context for my problem. I have removed the absolutely irrelevant DB statements, but apart from that this is what the code looks like.


If you need to know anything else about the code, please let me know. What I would really like to know is whether I am using the ffmpeg stream-capturing statements correctly. The stream contains both video and audio components and I need to capture both. Hence I am making the following call:


process = ffmpeg.input(streamurl).output('pipe:', pix_fmt='yuv420p', format='avi', vcodec='libx264', acodec='pcm_s16le', ac=channels, ar=samplerate, vsync=0, loglevel='quiet').run_async(pipe_stdout=True)



Is this how it is supposed to be done? More importantly, why do I keep getting the freezes in the output video? I have monitored the streams manually, and they are quite consistent. Frame losses do not happen when I view them on the website (at least they are not obviously noticeable). Also, I have run the 'top' command on the host running the program. The CPU usage sometimes goes over 100% (which, I came to understand from some answers on SO, is to be expected when running ffmpeg), but the memory usage usually remains below 30%. So what is the issue here? What do I need to do in order to fix this problem (other than learn more about how ffmpeg works)?
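For comparison, the piping example in the ffmpeg-python documentation decodes to raw rgb24 frames, so each read of width * height * 3 bytes from stdout is exactly one frame (audio would have to be captured separately in that pattern), whereas my call above pipes an avi/libx264 container and then reshapes those bytes as if they were raw frames - I am not sure the two are equivalent. A minimal sketch of the documented pattern, with a placeholder URL and frame size:

import ffmpeg
import numpy as np

# Read decoded raw frames from an input stream over a pipe.
streamurl = "https://example.com/live/stream.m3u8"   # placeholder URL
width, height = 320, 180                              # placeholder frame size

process = (
    ffmpeg
    .input(streamurl)
    .output('pipe:', format='rawvideo', pix_fmt='rgb24')
    .run_async(pipe_stdout=True)
)
while True:
    in_bytes = process.stdout.read(width * height * 3)   # exactly one frame
    if not in_bytes:
        break
    frame = np.frombuffer(in_bytes, np.uint8).reshape([height, width, 3])
    # ...hand the frame off to the processing queue here...
process.wait()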


Thanks


I have tried using various ffmpeg options (while trying to find similar issues that others have encountered). I also tried running ffmpeg from the command line for a limited period of time (11 minutes), using the same options as in the Python code, and the captured content came out quite well. No freezes. No jumps in frames. But I need to use it in an automated way, and there will be multiple streams at any time. Also, when I try playing the captured content using ffplay, I sometimes get the message "co located POCs unavailable" when these freezes happen. What does it mean?