
Other articles (104)
-
Use it, talk about it, criticize it
10 April 2011
The first thing to do is to talk about it, either directly with the people involved in its development, or around you to convince new people to use it.
The larger the community, the faster it will evolve ...
A mailing list is available for any exchange between users. -
Mediabox: open images in the maximum space available to the user
8 February 2011
Image display is restricted to the width allowed by the site design (which depends on the theme in use), so images are shown at a reduced size. To take advantage of all the space available on the user's screen, a feature can be added that displays the image in a multimedia box appearing above the rest of the content.
To do this, the "Mediabox" plugin must be installed.
Configuring the multimedia box
As soon as (...) -
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
On other sites (13290)
-
FFmpeg subtitle decoding (Delphi): decode result > 0 but values are zero/null
27 January 2020, by coban

while True do begin
  // busy-wait until a subtitle packet has been queued
  if vst.sub_q.size = 0 then
    Continue;
  ret := mpl.packet_queue_get(vst.sub_q, pkt, 0);
  // a flush packet signals a seek: reset the decoder state
  if pkt.data = mpl.flush_pkt.data then
  begin
    avcodec_flush_buffers(vst.sub_st.codec);
    Continue;
  end;
  ret := avcodec_decode_subtitle2(vst.sub_codec_ctx, @subt, @got_sub, @pkt);
  if ret >= 0 then
  begin
    if got_sub = 1 then
    begin
      for i := 0 to subt.num_rects - 1 do
      begin
        s := subt.rects[i].text;
        // debug probe: surface any non-empty text or ass payload
        if (subt.rects[i].text <> '') or (subt.rects[i].ass <> '') then
          raise Exception.Create(s);
      end;
    end;
  end;
  av_packet_unref(pkt);
end;

I'm trying to build a video player using FFmpeg + SDL2 with Delphi. I found an example that played video without problems but had audio issues. I solved the audio by inspecting the ffplay example in C/C++ and translating the audio part to Delphi. I also tried to translate ffplay completely, but I got strange behavior on the video side: playback starts, and after a few seconds the audio keeps going without problems while the video turns into a green screen.
Is something going wrong somehow when calling the "decoder_decode_frame" or "video_refresh" function in the video path?
However, so far I am able to decode and play audio and video streams in sync. At the moment the picture quality could be better, but that is for later.

The problem:
I am using the code above to decode the subtitle stream.
Initializing and opening the codec with "avcodec_open2" succeeds, queueing packets succeeds, and retrieving a packet succeeds at the right moment, when it should be displayed. The "avcodec_decode_subtitle2" function succeeds, the result is > 0, the got_sub variable is set to 1, and the AVSubtitle (subt) is being assigned.
I do not get any errors or warnings; everything seems correct except the values inside the AVSubtitle (subt). This seems to be the problem (comparing with ffplay.c):
subt.format is 1
subt.start_display_time is 0
subt.end_display_time is 0
subt.num_rects is 1
rects (ppAVSubtitleRect) is assigned, but with values zero/nil
rects._type is SUBTITLE_NONE

Playing a *.mkv file, the subtitle information from the codec is:
codec name ’srt’
codec long_name ’SubRip subtitle’
If this type of subtitle is not supported, why can we decode it without any warnings or errors?
If it is possible to get the subtitle text, how can I achieve this?

AVSubtitle = record
format: uint16_t; (* 0 = graphics *)
start_display_time: uint32_t; (* relative to packet pts, in ms *)
end_display_time: uint32_t; (* relative to packet pts, in ms *)
num_rects: unsigned;
rects: ppAVSubtitleRect;
pts: int64_t; // < Same as packet pts, in AV_TIME_BASE
end;
AVSubtitleType = ( //
SUBTITLE_NONE, SUBTITLE_BITMAP, // < A bitmap, pict will be set
(*
* Plain text, the text field must be set by the decoder and is
* authoritative. ass and pict fields may contain approximations.
*)
SUBTITLE_TEXT,
(*
* Formatted text, the ass field must be set by the decoder and is
* authoritative. pict and text fields may contain approximations.
*)
SUBTITLE_ASS);
AVSubtitleRect = record
x: int; // < top left corner of pict, undefined when pict is not set
y: int; // < top left corner of pict, undefined when pict is not set
w: int; // < width of pict, undefined when pict is not set
h: int; // < height of pict, undefined when pict is not set
nb_colors: int; // < number of colors in pict, undefined when pict is not set
{$IFDEF FF_API_AVPICTURE}
(*
* @deprecated unused
*)
// attribute_deprecated
pict: AVPicture deprecated;
{$ENDIF}
(*
* data+linesize for the bitmap of this subtitle.
* Can be set for text/ass as well once they are rendered.
*)
data: puint8_t_array_4;
linesize: Tint_array_4;
_type: AVSubtitleType;
text: pAnsiChar; // < 0 terminated plain UTF-8 text
(*
* 0 terminated ASS/SSA compatible event line.
* The presentation of this is unaffected by the other values in this
* struct.
*)
ass: pAnsiChar;
flags: int;
end;
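Judging from ffplay.c and the declarations above, the SubRip ("srt") decoder is a text decoder, so a decoded rect would be expected to come back as SUBTITLE_ASS (or SUBTITLE_TEXT) with the ass/text field set; a rect that reads as SUBTITLE_NONE with nil fields may point at a record-layout mismatch between the Delphi translation and the C headers (note the {$IFDEF FF_API_AVPICTURE} pict field above, which shifts every following field if the define disagrees with how the library was built). For reference, a minimal sketch of how the rects would normally be read once they are filled in, indexing rects the same way the code above does; UTF8ToString accepting a PAnsiChar is an assumption about the RTL version in use:

// Hedged sketch, not the binding's confirmed API: collect the text of a
// decoded AVSubtitle, reading the field that matches each rect's type.
function CollectSubtitleText(const subt: AVSubtitle): string;
var
  i: Integer;
begin
  Result := '';
  for i := 0 to Integer(subt.num_rects) - 1 do
    case subt.rects[i]^._type of
      SUBTITLE_TEXT:
        // 0-terminated plain UTF-8 text
        Result := Result + UTF8ToString(subt.rects[i]^.text) + sLineBreak;
      SUBTITLE_ASS:
        // full ASS/SSA "Dialogue:" event line; the payload text is the
        // last comma-separated field
        Result := Result + UTF8ToString(subt.rects[i]^.ass) + sLineBreak;
      SUBTITLE_BITMAP:
        ; // bitmap subtitle: rects[i]^.data / linesize hold the image
      SUBTITLE_NONE:
        ; // nothing usable was decoded into this rect
    end;
end;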
----------- edit ----------------------

SetLength(buf, 0);
for i := 0 to pkt.size - 1 do
  Insert(pkt.data[i], buf, Length(buf));
s := TEncoding.UTF8.GetString(buf);

It seems that in this case the text is stored in pkt.data. I don't think this is the right way to get the subtitle text, but I wonder why the AVSubtitle doesn't get the values. The AVPacket values (pts, dts, duration, etc.) seem to be correct.
If I have to use such a construction to get the subtitle text value, how can I detect whether it is UTF-8 encoded, or how do I determine the encoding?
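Strictly speaking you cannot prove a byte sequence is UTF-8, since plain ASCII is also valid UTF-8; what you can do is reject byte sequences that are not well-formed UTF-8 and fall back to another codepage in that case. (For what it's worth, Matroska stores SRT tracks as S_TEXT/UTF8, so UTF-8 is what the container promises.) A minimal standalone validator sketch, assuming nothing beyond the RTL:

// Sketch: returns True if the byte array is well-formed UTF-8.
// It does not reject overlong encodings or surrogate code points,
// but that is usually enough to decide between TEncoding.UTF8 and
// a fallback such as TEncoding.ANSI.
function IsValidUtf8(const b: TBytes): Boolean;
var
  i, j, n, len: Integer;
begin
  Result := False;
  i := 0;
  n := Length(b);
  while i < n do
  begin
    if b[i] < $80 then len := 1                 // 0xxxxxxx: ASCII
    else if (b[i] and $E0) = $C0 then len := 2  // 110xxxxx
    else if (b[i] and $F0) = $E0 then len := 3  // 1110xxxx
    else if (b[i] and $F8) = $F0 then len := 4  // 11110xxx
    else Exit;                                  // invalid lead byte
    if i + len > n then Exit;                   // truncated sequence
    for j := i + 1 to i + len - 1 do
      if (b[j] and $C0) <> $80 then Exit;       // bad continuation byte
    Inc(i, len);
  end;
  Result := True;
end;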
There are more questions to be answered, like the position of the subtitle. The bad thing about FFmpeg is that it's almost impossible to get answers.

------------------ edit 2 ----------------------------------
OK, how I am solving this: I create, in memory, a PNG image with transparency and write the text over it, convert the image to a PAVFrame, and show it at the given pts time plus the duration. For the positioning I chose horizontal centering for each line, and vertically the bottom minus X pixels. It looks like it's working so far.

Next: the formatting of the text, like bold, italic, etc. Can anybody give a hint whether there is a quick solution (function/method/example)? I will try to create one, but a quick solution is welcome. As long as no nested formatting is used it doesn't sound tricky, but I've read that nested formatting can be used.
Could this (formatted text) be the reason why FFmpeg is not assigning any values to the AVSubtitle?
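On the formatting question: as long as only simple HTML-style tags (<b>, <i>, <u>, <font>) or ASS override blocks ({\b1} ... {\b0}) appear, a regular expression is enough to strip them, nested or not, because every tag is removed independently of its depth. A quick sketch (StripSrtTags is a made-up helper name; TRegEx lives in System.RegularExpressions):

// Sketch: remove simple SRT formatting tags such as <b>, </i>,
// <font color="#ff0000">, keeping the plain text in between.
function StripSrtTags(const s: string): string;
begin
  Result := TRegEx.Replace(s, '</?\s*(b|i|u|font)\b[^>]*>', '',
    [roIgnoreCase]);
  // also drop ASS-style override blocks like {\b1}...{\b0}
  Result := TRegEx.Replace(Result, '\{\\[^}]*\}', '');
end;

For example, StripSrtTags('<i>Hello</i> {\b1}world{\b0}') would give 'Hello world'. Actually rendering bold/italic, rather than stripping it, would mean splitting the string at the tags and drawing each run with a different font style.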
-
Android: Recording and Streaming at the same time
29 March 2016, by Bruno Siqueira

This is not really a question so much as a presentation of all my attempts to solve one of the most challenging features I was faced with.
I use the libstreaming library to stream real-time video to a Wowza server, and I need to record it at the same time to the SD card. I am presenting all my attempts below in order to collect new ideas from the community.
Copy bytes from the libstreaming stream to an MP4 file
Development
We created an interception in the libstreaming library to copy all the sent bytes to an MP4 file. Libstreaming sends the bytes to the Wowza server through a LocalSocket. It uses MediaRecorder to access the camera and the mic of the device and sets the output file to the LocalSocket's input stream. What we do is create a wrapper around this input stream, extending InputStream, and create a file output stream inside it. So, every time libstreaming executes a read over the LocalSocket's input stream, we copy all the data to the output stream, trying to create a valid MP4 file, as in the sketch below.
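(The interception itself is language-agnostic. As an illustration only, since the rest of this page's code is Delphi, here is the same tee-on-read idea sketched as a stream wrapper; every name here is hypothetical rather than taken from libstreaming.)

uses System.Classes;

type
  // Sketch of a "tee on read" wrapper: every byte read from the
  // source stream is also written to a copy stream (e.g. a file).
  // Seek/Write overrides are omitted for brevity.
  TTeeReadStream = class(TStream)
  private
    FSource: TStream; // stream being consumed (the socket side)
    FCopy: TStream;   // receives a duplicate of everything read
  public
    constructor Create(ASource, ACopy: TStream);
    function Read(var Buffer; Count: Longint): Longint; override;
  end;

constructor TTeeReadStream.Create(ASource, ACopy: TStream);
begin
  inherited Create;
  FSource := ASource;
  FCopy := ACopy;
end;

function TTeeReadStream.Read(var Buffer; Count: Longint): Longint;
begin
  Result := FSource.Read(Buffer, Count);
  if Result > 0 then
    FCopy.WriteBuffer(Buffer, Result); // duplicate the consumed bytes
end;

Note that the impediment below applies regardless of language: the copy is a raw dump of the socket payload, so it still lacks the moov atom.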
Impediment
When we tried to read the file, it was corrupted. We realized that there was metadata missing from the MP4 file, specifically the moov atom. We tried to delay the closing of the stream in order to give it time to send this header (this was still a guess), but it didn't work. To test the coherence of this data, we used a paid software tool to try to recover the video, including the header. It became playable, but it was mostly a green screen, so this turned out not to be a trustworthy solution. We also tried "untrunc", a free open-source command-line program, and it couldn't even start the recovery, since there was no moov atom.
Use ffmpeg compiled for Android to access the camera
Development
FFmpeg has a Gradle plugin with a Java interface for using it inside Android apps. We thought we could access the camera via the command line (it is probably at "/dev/video0") and send it to the media server.
Impediment
We got a "Permission denied" error when trying to access the camera. The workaround would be to root the device to gain access, but that makes the phones lose their warranty and could brick them.
Use ffmpeg compiled for Android combined with MediaRecorder
Development
We tried to make FFmpeg stream an MP4 file while it was being recorded inside the phone via MediaRecorder.
Impediment
FFmpeg cannot stream MP4 files whose recording has not yet finished.
Use ffmpeg compiled for Android with libstreaming
Development
Libstreaming uses a LocalServerSocket as the connection between the app and the server, so we thought we could connect ffmpeg to the LocalServerSocket's local address to copy the stream directly to a local file on the SD card. Right after the streaming started, we also ran the ffmpeg command to start recording the data to a file. Using ffmpeg, we believed it would create the MP4 file in the proper way, meaning with the moov atom header included.
Impediment
The "address" created is not readable via command line, as a local address inside the phone. So the copy is not possible.
Use OpenCV
Development
OpenCV is an open-source, cross-platform library that provides building blocks for computer vision experiments and applications. It offers high-level interfaces for capturing, processing, and presenting image data. It has its own APIs to connect to the device camera, so we started studying it to see if it had the necessary functionality to stream and record at the same time.
Impediment
We found out that the library is not really designed for this, but rather for mathematical image manipulation. We were even recommended to use libstreaming (which we already do).
Use Kickflip SDK
Development
Kickflip is a media streaming service that provides its own SDK for Android and iOS development. It also uses HLS instead of RTMP, which is a newer protocol.
Impediment
Their SDK requires that we create an Activity with a camera view that occupies the entire screen of the device, breaking the usability of our app.
Use Adobe Air
Development
We started consulting other developers of apps already available in the Play Store that already stream to servers.
Impediment
Getting in touch with those developers, they assured us that it would not be possible to record and stream at the same time using this technology. What's more, we would have to redo the entire app from scratch using Adobe Air.
UPDATE
WebRTC
Development
We started using WebRTC, following this great project. We included the signaling server in our Node.js server and started doing the standard handshake via sockets. We were still toggling between local recording and streaming via WebRTC.
Impediment
WebRTC does not work in every network configuration. Other than that, the camera acquisition is all native code, which makes it a lot harder to copy or intercept the bytes.