
Media (2)
-
GetID3 - File information block
9 April 2013
Updated: May 2013
Language: French
Type: Image
-
GetID3 - Additional buttons
9 April 2013
Updated: April 2013
Language: French
Type: Image
Other articles (111)
-
Submit bugs and patches
13 April 2011
Unfortunately, no software is ever perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using (including the exact version); as precise an explanation of the problem as possible; if possible, the steps taken that lead to the problem; and a link to the site / page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (14337)
-
dnn/vf_dnn_detect.c : add tensorflow output parse support
6 May 2021, by Ting Fu
dnn/vf_dnn_detect.c: add tensorflow output parse support
The testing model is the official TensorFlow model from the GitHub repo; please refer to
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
to download the detection model you need.
For example, local testing was carried out with 'ssd_mobilenet_v2_coco_2018_03_29.tar.gz', using one image of a dog from
https://github.com/tensorflow/models/blob/master/research/object_detection/test_images/image1.jpg
The testing command is:
./ffmpeg -i image1.jpg -vf dnn_detect=dnn_backend=tensorflow:input=image_tensor:output=\
"num_detections&detection_scores&detection_classes&detection_boxes":model=ssd_mobilenet_v2_coco.pb,\
showinfo -f null -
We will see results similar to the following:
[Parsed_showinfo_1 @ 0x33e65f0] side data - detection bounding boxes:
[Parsed_showinfo_1 @ 0x33e65f0] source: ssd_mobilenet_v2_coco.pb
[Parsed_showinfo_1 @ 0x33e65f0] index: 0, region: (382, 60) -> (1005, 593), label: 18, confidence: 9834/10000.
[Parsed_showinfo_1 @ 0x33e65f0] index: 1, region: (12, 8) -> (328, 549), label: 18, confidence: 8555/10000.
[Parsed_showinfo_1 @ 0x33e65f0] index: 2, region: (293, 7) -> (682, 458), label: 1, confidence: 8033/10000.
[Parsed_showinfo_1 @ 0x33e65f0] index: 3, region: (342, 0) -> (690, 325), label: 1, confidence: 5878/10000.
There are two boxes of dog with scores 98.34% & 85.55%, and two boxes of person with scores 80.33% & 58.78%.
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
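The showinfo lines above are plain text, with each confidence reported as a fraction out of 10000. Purely as an illustration (the line format is taken from the log above; the class and method names here are made up), a small C# snippet that turns one of those lines into a label and a percentage might look like this:

using System;
using System.Text.RegularExpressions;

class ShowinfoDetectionParser
{
    // Matches lines such as:
    // "... index: 0, region: (382, 60) -> (1005, 593), label: 18, confidence: 9834/10000."
    static readonly Regex DetectionLine = new Regex(
        @"index:\s*(\d+), region: \((\d+), (\d+)\) -> \((\d+), (\d+)\), label: (\d+), confidence: (\d+)/(\d+)");

    static void Main()
    {
        string sample =
            "[Parsed_showinfo_1 @ 0x33e65f0] index: 0, region: (382, 60) -> (1005, 593), label: 18, confidence: 9834/10000.";

        Match m = DetectionLine.Match(sample);
        if (m.Success)
        {
            int label = int.Parse(m.Groups[6].Value);
            double confidence = 100.0 * int.Parse(m.Groups[7].Value) / int.Parse(m.Groups[8].Value);
            // For the sample line this prints: label 18, confidence 98.34%
            Console.WriteLine($"label {label}, confidence {confidence:F2}%");
        }
    }
}
-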
How do I compress a video file in C# (Xamarin Android)
1 August 2016, by stackOverNo
I'm currently working on a Xamarin.Android project, and am attempting to upload a video to an AWS server and then be able to play it back. The upload is working correctly as far as I can tell.
I'm retrieving the file from the user's phone, turning it into a byte array, and uploading that. This is the code to upload:
if (isImageAttached || isVideoAttached)
{
//upload the file
byte[] fileInfo = System.IO.File.ReadAllBytes(filePath);
Task<Media> task = client.SaveMediaAsync(fileInfo, nameOfFile);
mediaObj = await task;
//other code below is irrelevant to example
}
and SaveMediaAsync is a function I wrote in a PCL:
public async Task<Media> SaveMediaAsync(byte[] fileInfo, string fName)
{
Media a = new Media();
var uri = new Uri(RestUrl);
try
{
MultipartFormDataContent form = new MultipartFormDataContent();
form.Add(new StreamContent(new MemoryStream(fileInfo)), "file", fName); //add file
var response = await client.PostAsync(uri, form); // post the form; client is an HttpClient object
string info = await response.Content.ReadAsStringAsync();
//save info to media object
string[] parts = info.Split('\"');
a.Name = parts[3];
a.Path = parts[7];
a.Size = Int32.Parse(parts[10]);
}
catch(Exception ex)
{
//handle exception
}
return a;
}
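As an aside, the Split('"') parsing above depends on the exact position of every quote in the response. If the endpoint returns JSON with fields along the lines of name, path and size (those field names are only a guess inferred from the indices used above), a more robust sketch using Newtonsoft.Json would be:

using Newtonsoft.Json.Linq;

// Hypothetical replacement for the Split-based parsing above.
// Assumes the upload endpoint returns JSON such as:
//   { "name": "...", "path": "...", "size": 12345 }
// The field names are assumptions, not something confirmed by the question.
static Media ParseMediaResponse(string info)
{
    var a = new Media();
    JObject obj = JObject.Parse(info);
    a.Name = (string)obj["name"];
    a.Path = (string)obj["path"];
    a.Size = (int)obj["size"];
    return a;
}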
After uploading the video like that, I'm able to view it in a browser using the public URL. The quality is the same, and there is no issue with lag or load time. However, when I try to play back the video using the same public URL in my app on an Android device, it takes an unbelievably long time to load the video. Even once it is loaded, it plays less than a second and then seems to start loading the video again (the part of the progress bar that shows how much of the video has loaded jumps back to the current position and starts loading again).
VideoView myVideo = FindViewById<VideoView>(Resource.Id.TestVideo);
myVideo.SetVideoURI(Android.Net.Uri.Parse(url));
//add media controller
MediaController cont = new MediaController(this);
cont.SetAnchorView(myVideo);
myVideo.SetMediaController(cont);
//start video
myVideo.Start();
Now I'm trying to play a 15-second video that is 5.9 MB. When I try to play a 5-second video that's 375 KB, it plays with no issue. This leads me to believe I need to make the video file smaller before playing it back, but I'm not sure how to do that. I'm trying to allow the user to upload their own videos, so I'll have all different file formats and sizes.
I've seen some people suggesting ffmpeg as a C# library to alter video files, but I'm not quite sure what it is I need to do to the video file. Can anyone fill in the gaps in my knowledge here?
Thanks for your time, it's greatly appreciated!
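To make the question concrete: the usual way to "compress" such a video with ffmpeg is to re-encode it to H.264/AAC at a lower resolution and bitrate, and to move the MP4 index (moov atom) to the front of the file so that playback can start before the whole file has downloaded. The sketch below simply shells out to an ffmpeg executable; it assumes one is available where the code runs (realistically the server that receives the upload rather than the Android device), and the paths and settings are examples only:

using System.Diagnostics;

// Minimal sketch: re-encode an uploaded video into a smaller,
// streaming-friendly MP4 by invoking an ffmpeg executable.
// Assumes ffmpeg is installed and on the PATH wherever this runs
// (e.g. the upload server); paths and settings are illustrative only.
static void CompressForMobile(string inputPath, string outputPath)
{
    var psi = new ProcessStartInfo
    {
        FileName = "ffmpeg",
        Arguments =
            $"-i \"{inputPath}\" " +
            "-c:v libx264 -preset fast -crf 28 " +  // H.264 at a reasonable quality/size trade-off
            "-vf scale=-2:720 " +                   // cap the height at 720p, keep the aspect ratio
            "-c:a aac -b:a 128k " +                 // AAC audio
            "-movflags +faststart " +               // put the moov atom first so playback can start early
            $"\"{outputPath}\"",
        UseShellExecute = false,
        RedirectStandardError = true
    };

    using (var process = Process.Start(psi))
    {
        // ffmpeg writes its progress to stderr; read it so the pipe doesn't fill up.
        string log = process.StandardError.ReadToEnd();
        process.WaitForExit();
    }
}

The -movflags +faststart part in particular matches the symptom described above: when the MP4 index sits at the end of the file, VideoView has to fetch far more of the file before it can begin playing.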
-
Xuggler encoding and muxing
18 December 2012, by HeineyBehinds
I'm trying to use Xuggler (which I believe uses ffmpeg under the hood) to do the following:
- Accept a raw MJPEG video bitstream (from a small TTL serial camera) and encode/transcode it to H.264; and
- Accept a raw audio bitstream (from a microphone) and encode it to AAC; then
- Mux the two (audio and video) bitstreams together into an MPEG-TS container
I've watched/read some of their excellent tutorials, and so far here's what I've got:
// I'll worry about implementing this functionality later, but
// involves querying native device drivers.
byte[] nextMjpeg = getNextMjpegFromSerialPort();
// I'll also worry about implementing this functionality as well;
// I'm simply providing these for thoroughness.
BufferedImage mjpeg = MjpegFactory.newMjpeg(nextMjpeg);
// Specify a h.264 video stream (how?)
String h264Stream = "???";
IMediaWriter writer = ToolFactory.makeWriter(h264Stream);
writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264);
writer.encodeVideo(0, mjpeg);
For one, I think I'm close here, but it's still not correct; and I've only gotten this far by reading the video code examples (not the audio - I can't find any good audio examples).
Literally, I'll be getting byte-level access to the raw video and audio feeds coming into my Xuggler implementation. But for the life of me I can't figure out how to get them into an H.264/AAC/MPEG-TS format. Thanks in advance for any help here.