
Media (1)
-
MediaSPIP Simple: future default graphic theme?
26 September 2013, by
Updated: October 2013
Language: French
Type: Video
Other articles (100)
-
Customize by adding your logo, banner, or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
-
Making files available
14 April 2011, by
By default, when it is initialized, MediaSPIP does not let visitors download files, whether originals or the result of their transformation or encoding. It only lets them view the files.
However, it is possible and easy to give visitors access to these documents in various forms.
All of this happens in the template configuration page. Go to the channel's administration area and choose in the navigation (...)
On other sites (16699)
-
I tried to play audio in my Alexa skill from my S3 bucket; the test tab shows a response, but I can't hear any sound
19 April 2022, by Siti Mayna
So I tried to play audio in my Alexa skill from my S3 bucket. The test tab shows a response, but in fact I can't hear any sound. Another fact: when I use the sample audio from https://developer.amazon.com/en-US/docs/alexa/custom-skills/ask-soundlibrary.html it works, so why doesn't it work when the audio comes from my own S3 bucket?


Notes:

- I've also tried testing the skill on my mobile phone.
- I've tried encoding the audio using FFmpeg.
- I've tried converting the audio with Jovo: https://v3.jovo.tech/audio-converter
- I don't know how to fix this error.
- There is no error message in CloudWatch.


Assumption: there is some problem with the audio resources themselves, or there is additional setup required to play audio from an S3 bucket, since the sample audio works.
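If the issue is permissions (my assumption, not confirmed by anything in the logs): Alexa fetches APLA audio over plain public HTTPS, so the S3 object has to be publicly readable. A minimal bucket-policy sketch, using a hypothetical bucket name, would be:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicReadForAlexaAudio",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-skill-audio-bucket/Media/*"
    }
  ]
}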


Steps to reproduce:

Build the interaction model.

Encode the audio to make it Alexa-skill friendly (meeting the requirements, such as sample rate); I used and tried all of these:




A:

ffmpeg -i input.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 -write_xing 0 output.mp3

B:

ffmpeg -i input.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 24000 -write_xing 0 output.mp3

C:

ffmpeg -y -i input.mp3 -ar 16000 -ab 48k -codec:a libmp3lame -ac 1 output.mp3
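To check whether an encoded file actually meets the requirements (as far as I know, Alexa expects MP3 at 48 kbps with a sample rate of 16000, 22050, or 24000 Hz), the result can be inspected with ffprobe, for example:

ffprobe -v error -show_entries stream=codec_name,sample_rate,bit_rate -of default=noprint_wrappers=1 output.mp3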





Upload the audio resources to the S3 bucket. The audio samples are on S3 storage, but none of them produce any sound.






Use the link and insert it into APLA.json:





{
  "type": "APLA",
  "version": "0.91",
  "description": "Simple document that generates speech",
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "type": "Sequencer",
    "items": [
      {
        "type": "Audio",
        "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
      }
    ]
  }
}




Note: I changed the source link depending on which audio file I was testing.




The handler in lambda_function.py:




# (Imports are not shown in the original snippet; these are the usual ones
# from the ASK SDK Python skill template.)
import json
import logging

import ask_sdk_core.utils as ask_utils
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_model.interfaces.alexa.presentation.apla import RenderDocumentDirective

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)


def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the APL JSON document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)


class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In LaunchRequestHandler")

        speak_output = "Hello World!"
        # .ask("add a reprompt if you want to keep the session open for the user to respond")

        return (
            handler_input.response_builder
            # .speak(speak_output)
            .add_directive(
                RenderDocumentDirective(
                    token="pagerToken",
                    document=_load_apl_document("APLA.json"),
                    datasources={}
                )
            )
            .response
        )
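Not shown in the snippet: the handler also has to be registered with the skill builder so Lambda routes requests to it. A minimal sketch of what I'd expect the rest of lambda_function.py to contain (the standard ask_sdk_core setup, my assumption):

from ask_sdk_core.skill_builder import SkillBuilder

sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())

# lambda_handler is the entry point configured for the AWS Lambda function
lambda_handler = sb.lambda_handler()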





Deploy.

Test it.

The result of the test on my end, the JSON response:


{
  "body": {
    "version": "1.0",
    "response": {
      "directives": [
        {
          "type": "Alexa.Presentation.APLA.RenderDocument",
          "token": "pagerToken",
          "document": {
            "type": "APLA",
            "version": "0.91",
            "description": "Simple document that generates speech",
            "mainTemplate": {
              "parameters": [
                "payload"
              ],
              "type": "Sequencer",
              "items": [
                {
                  "type": "Audio",
                  "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
                }
              ]
            }
          },
          "datasources": {}
        }
      ],
      "type": "_DEFAULT_RESPONSE"
    },
    "sessionAttributes": {},
    "userAgent": "ask-python/1.16.1 Python/3.7.12"
  }
}





On my CloudWatch (screenshot of the logs): no error messages appear.

-
How to correctly show video with transparency in Qt with OpenCV + FFmpeg
11 April 2022, by TheEnigmist
I'm trying to show a video with transparency in a Qt6 application using OpenCV + FFmpeg.
These are the tool versions:


- Win 11
- Qt 6.3.0
- OpenCV 4.5.5 (built with CMake)
- FFMPEG 2022-04-03-git-1291568c98-full_build-www.gyan.dev

I've used a base .mov video with transparency as test (link provided below).
First of all I've converted .mov video to .webm video (VP9) and I see in output text that alpha channel remains




ffmpeg -i '.\Retro Bars.mov' -c:v libvpx-vp9 -crf 30 -b:v 0 output.webm




Input #0, mov,mp4,m4a,3gp,3g2,mj2,
 ...
 Stream #0:0[0x1](eng): Video: qtrle (rle / 0x20656C72), argb(progressive),
 ...

Output #0, webm, 
 ...
 Stream #0:0(eng): Video: vp9, yuva420p(tv, progressive),
 ...



But when I inspect the output file with ffmpeg, the alpha channel seems to be lost:




ffmpeg -i .\output.webm




Input #0, matroska,webm,
 ...
 Stream #0:0(eng): Video: vp9 (Profile 0), yuv420p(tv, progressive),
 ...
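I suspect this is a decoder issue rather than a conversion issue: ffmpeg's built-in VP9 decoder seems to ignore the alpha plane, while the libvpx decoder exposes it. Forcing the decoder before the input should report yuva420p again (a sketch, not verified on this exact build):

ffmpeg -c:v libvpx-vp9 -i .\output.webm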



If I open output.webm with OBS, it is shown correctly without a background, as shown in the attached picture:



If I try to open it with OpenCV + FFmpeg, it shows a black background under the bars, as shown in the attached picture:



This is how I load the video in Qt:


cv::VideoCapture capture;
capture.open(filename, cv::CAP_FFMPEG);
capture.set(cv::CAP_PROP_CONVERT_RGB, false); // try forcing load alpha channel
... //in a thread
while (capture.read(frame)) {
 qDebug() << "c" << frame.channels() << "t" << frame.type() << "d" << frame.depth(); // output: c 3 t 16 d 0
 cv::cvtColor(frame, frame, cv::COLOR_BGR2RGBA); //useless since no alpha channel is detected
 img = QImage(frame.data, frame.cols, frame.rows, QImage::Format_RGBA8888);
 emit processedImage(img); // to show image in a QLabel with QPixmap::fromImage(img)
}



I think the problem is that when I load the video with OpenCV it doesn't detect the alpha channel, since I can load it correctly in other players (OBS, HTML5, etc.).
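One workaround I'm considering (an assumption on my side, not something I've verified): bypass OpenCV's internal BGR conversion entirely by letting ffmpeg decode with libvpx and pipe raw RGBA frames to the application:

ffmpeg -c:v libvpx-vp9 -i output.webm -f rawvideo -pix_fmt rgba -

Each frame would then arrive as width * height * 4 bytes that can be wrapped directly in a QImage with Format_RGBA8888.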


What I'm wrong with all process to show this video in Qt with transparency ?


EDIT: Added Dropbox link with test video + ffmpeg outputs:
sample items


-
avdevice/dshow: list_devices: show media type(s) per device
21 December 2021, by Diederick Niehorster
avdevice/dshow: list_devices: show media type(s) per device
The list_devices option of dshow didn't indicate whether a specific device provides audio or video output. This patch iterates through all media formats of all pins exposed by the device to see what types it provides for capture, and prints this to the console for each device. Importantly, this now makes it possible to find devices that provide both audio and video, and devices that provide neither.
Signed-off-by: Diederick Niehorster <dcnieho@gmail.com>
Reviewed-by: Roger Pack <rogerdpack2@gmail.com>
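For context, this is the listing the patch extends; devices (and now their media types) are printed with the usual dshow probe command:

ffmpeg -list_devices true -f dshow -i dummy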