
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (103)
-
Changing your graphic theme
22 February 2011 — The graphic theme does not touch the actual layout of the elements on the page. It only changes the appearance of those elements.
The placement can indeed appear to change, but this change is purely visual and does not affect the semantic structure of the page.
Changing the graphic theme in use
To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
Then simply go to the configuration area of the (...)
-
HTML5 audio and video support
10 April 2011 — MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
-
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (6560)
-
Capture CMOS video with FPGA, encode and send over Ethernet
23 December 2015, by ya_urock — I am planning an open source university project for my students, based on a Xilinx Zynq FPGA, that will capture CMOS video, encode it into a transport stream and send it over Ethernet to a remote PC. Basically I want to design yet another IP camera. I have strong FPGA experience, but lack knowledge regarding encoding and transferring video data. Here is my plan:
-
Connect the CMOS camera to the FPGA, receive video frames and save them to external DDR memory, verifying the result on a monitor via the HDMI output. I have no problems with that.
-
I understand that I have to compress the video stream, for example to H.264, and put it into a transport stream. Here I have little knowledge and need some hints.
-
Once I have formed the transport stream, I can send it over the network using UDP packets. I have a working hardware solution that reads data from a FIFO and sends it to the remote PC as UDP packets.
-
And finally I plan to receive and play the video using ffmpeg:
ffplay udp://localhost:5678
My question is basically about step 2: how do I convert pixel frames into a transport stream? My options are:
-
Use commercial IP cores. Here I doubt that they are free to use, and we don't have much funding.
-
Use open cores, such as:
- http://sourceforge.net/projects/hardh264/ - this core generates only a raw H.264 stream, but how do I encapsulate it into a transport stream?
- I have searched opencores.org, but with no success on this topic
- Maybe somebody knows some good open source FPGA projects relevant to this?
-
Develop a hardware encoder myself using Vivado HLS (C language). But the problem here is that I don't know the algorithm. Maybe I could dig into ffmpeg or the Cisco openh264 library and find a function that converts raw pixel frames to H.264 and then puts it into a transport stream? Any help would be appreciated here as well.
I am also worried about the format compatibility between the stream I might generate inside the FPGA and the one expected on the host by the ffplay utility. Any help, hints, links and books are appreciated! One way to sanity-check this is sketched below.
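Since step 2 is the sticking point, the TS encapsulation can first be prototyped on a PC with ffmpeg before committing anything to the FPGA. A minimal loopback sketch, assuming the encoder will emit an Annex B H.264 elementary stream; test.h264 is a hypothetical stand-in for that output:

# Wrap a raw H.264 elementary stream into MPEG-TS and send it over UDP.
# -re paces reading at native frame rate; -c:v copy skips re-encoding,
# so only the transport-stream encapsulation is exercised.
ffmpeg -re -f h264 -i test.h264 -c:v copy -f mpegts udp://127.0.0.1:5678

# In another terminal, receive and play exactly as in step 4:
ffplay udp://localhost:5678

Inspecting the packets ffmpeg emits here gives a reference for what the FPGA's TS output needs to look like.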
-
Washed colors with ffmpeg on macOS
18 December 2020, by Gabrielle — I am having problems with the colors of a screen-recorded video generated by ffmpeg on Mac.




I am using the following command to generate the video :


ffmpeg -f avfoundation -video_size 1980x1140 -framerate 30 -capture_cursor 1 -capture_mouse_clicks 1 -i '1' -preset ultrafast -an -qp 0 -crf 22 -pix_fmt yuv420p -y -c:v libx264 -c:a libfaac -vf eq=brightness=0.00:contrast=1:saturation=1 video_test.mkv



I tried changing qp to 1 and 2, changing crf to 0 and 2 (making it both higher and lower than qp), tried pix_fmt yuv444p, bgr0 and 0rgb, tried changing c:v to libx264rgb, and tried removing the vf filter, but the result is the same.

The only real difference I've noticed is when I set a really high crf, which makes the image very pixelated...


I've tried different values for brightness, contrast and saturation, but I couldn't find a good combination.


Any idea how to make the video look more like the original?


Full log :




About the red message in the middle of the image: it is always printed, even when I use yuv444p (the red message still says yuv420p).
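One hedged guess, not a confirmed diagnosis: washed-out colors in screen captures often come from full-range RGB content being converted to limited-range yuv420p without the range being tagged, so the player stretches the levels twice. A sketch that performs the conversion and tagging explicitly; the capture options are kept from the question, while the scale filter and the color-tagging flags are the additions to try:

# Convert to limited-range BT.709 explicitly and tag the stream accordingly,
# instead of compensating with the eq filter.
ffmpeg -f avfoundation -video_size 1980x1140 -framerate 30 -capture_cursor 1 \
  -capture_mouse_clicks 1 -i '1' -an -c:v libx264 -preset ultrafast -crf 18 \
  -vf "scale=out_color_matrix=bt709:out_range=limited" -pix_fmt yuv420p \
  -colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range tv \
  -y video_test.mkv

If the result still looks washed out, comparing it against a libx264rgb encode of the same capture would show whether chroma subsampling or range tagging is at fault.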


-
How to deal with a live raw H.264 stream to send over the network
3 November 2015, by jinhwan — What I want to do is send a live camera stream, encoded in H.264, to GStreamer. I have already seen many examples that send it over the network using RTP or MPEG-TS. But the problem is that all those examples assume the input is a fixed file, or a live stream already wrapped in a transport protocol, as below.
client:
gst-launch-1.0 videotestsrc horizontal-speed=5 ! x264enc tune="zerolatency" threads=1 ! mpegtsmux ! tcpserversink host=192.168.0.211 port=8554
server:
gst-launch-1.0 tcpclientsrc port=8554 host=192.168.0.211 ! tsdemux ! h264parse ! avdec_h264 ! xvimagesink
But my camera offers the interface below (written in Java; it actually runs on Android). The interface delivers just live raw H.264 blocks.
mReceivedVideoDataCallBack = new DJIReceivedVideoDataCallBack() {
    @Override
    public void onResult(byte[] videoBuffer, int size) {
        // live raw H.264 blocks arrive here, one buffer at a time
    }
};

I can create a TCP session to send those data blocks. But how can I turn data that is not packed in a transport protocol into a format the GStreamer TCP client can understand?
Transcoding the original stream into TS format on the camera side could be a solution, but I have no clue how to transcode data that comes neither from a file nor in a transport format. I have searched GStreamer and ffmpeg, but so far I could not work out a way to handle an H.264 block stream through the supported interfaces.
Or is there any way to make GStreamer directly accept such simple raw H.264 blocks?
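A hedged sketch of that last idea: if the DJI callback delivers Annex B NAL units (a start-code-prefixed byte-stream, which is an assumption about the SDK), the buffers can be written to a TCP socket unchanged and parsed on the receiving side without any container, since h264parse can recover frames from a raw byte-stream:

# Receiver: parse raw Annex B H.264 straight off the TCP socket.
# No tsdemux stage is needed because there is no container to demux.
gst-launch-1.0 tcpclientsrc host=192.168.0.211 port=8554 ! h264parse ! avdec_h264 ! xvimagesink

Here the Android side plays the TCP server role and simply writes each videoBuffer to the accepted socket as it arrives.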