- transmogrify.htmlcontentextractor-1.0-1.lbn13.noarch
Helpful transmogrifier blueprints to extract text or html out of html content.
transmogrify.htmlcontentextractor.auto
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This blueprint has a clustering algorithm that tries to automatically extract the content from the HTML template.
This is slow and not always effective. Often you will need to input your own template extraction rules.
In addition to extracting Title, Description and Text of items the blueprint will output
the rules it generates to a logger with the same name as the blueprint.
Setting debug mode on templateauto will give you details about the rules it uses. ::
...
DEBUG:templateauto:'icft.html' discovered rules by clustering on 'http://...'
Rules:
text= html //div[@id = "dal_content"]//div[@class = "content"]//p
title= text //div[@id = "dal_content"]//div[@class = "content"]//h3
Text:
TITLE: ...
MAIN-10: ...
MAIN-10: ...
MAIN-10: ...
Options
-------
condition
    A TAL expression controlling whether this blueprint is used.
debug
    Enables debug output of the generated rules; default is ''.
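For example, a minimal pipeline section enabling debug output (the section name ``templateauto`` matches the logger name shown above; any non-empty value for ``debug`` is assumed to enable it) ::

    [templateauto]
    blueprint = transmogrify.htmlcontentextractor.auto
    debug = 1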
transmogrify.htmlcontentextractor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This blueprint extracts the title, description and body from html,
either via XPath, TAL or automatic cluster analysis.
Rules are in the form of ::
(title|description|text|anything) = (text|html|optional|tal) Expression
where Expression is either a TAL or an XPath expression.
For example ::
[template1]
blueprint = transmogrify.htmlcontentextractor
title = text //div[@class='body']//h1[1]
_delete1 = optional //div[@class='body']//a[@class='headerlink']
_delete2 = optional //div[contains(@class,'admonition-description')]
description = text //div[contains(@class,'admonition-description')]//p[@class='last']
text = html //div[@class='body']
Note that for a single template, e.g. template1, ALL of the XPaths need to match, otherwise
that template will be skipped and the next template tried. If you'd like a particular XPath
not to be necessary for the template to match, use the keyword `optional` or `optionaltext`
instead of `text` or `html` before the XPath.
When an XPath is applied within a single template, the HTML it matches is removed from the
page, so another rule in the same template can't match the same HTML fragment. This also
gives you a way to effectively remove HTML from the content: extract any part you don't
need (e.g. redundant text, title or description) into a field you won't use.
To help debug your template rules you can set debug mode.
For more information about XPath see
- http://www.w3schools.com/xpath/default.asp
- http://blog.browsermob.com/2009/04/test-your-selenium-xpath-easily-with-firebug/
HTMLContentExtractor
====================
This blueprint extracts fields from html, either via XPath rules or by automatic cluster
analysis.
transmogrify.htmlcontentextractor
---------------------------------
You can define a series of rules which will be applied to the '_text'
of the input item. Each rule uses an XPath or a TAL expression to
extract html or text out of the html and adds the result as a key on the outputted item.
Each option of the blueprint is a rule of the following form ::
(N-)field = (optional)(text|html|delete|optional) xpath
OR
(N-)field = (optional)tal tal-expression
"field" is the attribute that will be set with the results of the xpath.
"format" is what to do with the results of the xpath. "optional" means the same as
"delete" but won't prevent the group from matching. If the format is "delete" or
"optional" then the field name doesn't matter, but it must still be unique.
"xpath" is an xpath expression.
If the format is "tal" then instead of an XPath you can use a TAL expression. The TAL
expression is evaluated on the item AFTER the XPath expressions have been applied.
For example ::
[template]
blueprint = transmogrify.htmlcontentextractor
title = text //div[@class='body']//h1[1]
_permalink = text //div[@class='body']//a[@class='headerlink']
_text = html //div[@class='body']
_label = optional //p[contains(@class,'admonition-title')]
description = optional //div[contains(@class,'admonition-description')]/p[@class='last']/text()
_remove_useless_links = optional //div[@id = 'indices-and-tables']
mimetype = tal string:text/html
text = tal python:item['_text'].replace('id="blah"','')
You can delete parts of the html by extracting content into fields such as _permalink and _label.
These fields won't be used to set any properties on the final content, so this is an effective
means of deleting parts of the html.
TAL expressions are evaluated after the XPath expressions, so here the TAL rule post-processes
the result of the _text XPath to produce text stripped of a certain id.
N is the group number. Groups are run in order of group number. If
any rule in a group doesn't match (unless it's marked optional) then the next group
will be tried instead. Group numbers are optional.
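As a sketch, groups within one section might look like this (the XPaths are illustrative only, not taken from a real site) ::

    [template]
    blueprint = transmogrify.htmlcontentextractor
    1-title = text //div[@class='body']//h1[1]
    1-text = html //div[@class='body']
    2-title = text //h1[1]
    2-text = html //body

If any non-optional rule in group 1 fails to match, group 2 is tried instead.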
Instead of groups you can also chain several blueprint sections together. The blueprint
will set '_template' on the item; if another instance of the blueprint finds the '_template'
key in an item it will ignore that item.
The '_template' field is the remainder of the html once all the content matched by the
XPath expressions has been removed.
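A sketch of chaining two sections this way, assuming a standard collective.transmogrifier pipeline definition (the section names and XPaths here are illustrative) ::

    [transmogrifier]
    pipeline =
        template1
        template2

    [template1]
    blueprint = transmogrify.htmlcontentextractor
    title = text //div[@id='content']//h1[1]
    text = html //div[@id='content']

    [template2]
    blueprint = transmogrify.htmlcontentextractor
    title = text //h1[1]
    text = html //body

Items matched by template1 get '_template' set, so template2 only processes the items template1 skipped.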
transmogrify.htmlcontentextractor.auto
--------------------------------------
This blueprint will analyse the html and attempt to discover the rules needed to extract
the title, description and body of the html.
If the logger output is in DEBUG mode then the XPaths used by the auto extractor will be
output to the logger.
Located in LBN / … / Plone and Zope / BastionLinux 13
- transmogrify.pathsorter-1.0b4-2.lbn13.noarch
transmogrify.pathsorter is a blueprint for reordering items into tree-sorted order.
- transmogrify.ploneremote-1.3-2.lbn13.noarch
transmogrify.ploneremote is a package of transmogrifier blueprints for uploading content via the Zope XML-RPC API to a Plone site.
The Plone site does not need any modifications; vanilla Zope XML-RPC is used.
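As an illustrative sketch only (the blueprint name, option name and target URL below are assumptions, not taken from this page), an upload step might look like ::

    [upload]
    blueprint = transmogrify.ploneremote.remoteconstructor
    target = http://admin:secret@localhost:8080/Plone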
- transmogrify.print-0.5.0-1.lbn13.noarch
Transmogrifier blueprint to print pipeline item keys
- transmogrify.regexp-0.5.0-1.lbn13.noarch
transmogrify.regexp allows you to use regular expressions and format strings to search and replace key values in a transmogrifier pipeline.
- transmogrify.siteanalyser-1.3-2.lbn13.noarch
Transmogrifier blueprints that look at how html items are linked to gather metadata about items.
transmogrify.siteanalyser.defaultpage
    Determines that an item is the default page for a container if it has many links to items in that container.
transmogrify.siteanalyser.relinker
    Fixes links in html content. Previous blueprints can adjust the '_path' and record the original path in '_origin'; relinker will then fix all the img and href links. It will also normalize ids.
transmogrify.siteanalyser.attach
    Finds attachments which are only linked to from a single page. Attachments are merged into the linking item, either by setting keys or by moving them into a folder.
transmogrify.siteanalyser.title
    Determines the title of an item from the link text used.
- transmogrify.sqlalchemy-1.0.1-2.lbn13.noarch
Feed data from SQLAlchemy into a transmogrifier pipeline
- transmogrify.webcrawler-1.2.1-2.lbn13.noarch
A source blueprint for crawling content from a site or local html files.
Webcrawler imports HTML either from a live website, from a folder on disk, or from a folder on disk whose html once came from a live website and may still have absolute links referring to that website.
To crawl a live website, supply the crawler with a base http url to start crawling from. This url must be the prefix shared by all the other urls you want from the site.
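A minimal source section might look like the following sketch (the ``url`` option name and the site address are assumptions for illustration) ::

    [crawler]
    blueprint = transmogrify.webcrawler
    url = http://www.example.com/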
- transmogrify.xmlsource-1.0-2.lbn13.noarch
Simple xml reader for a transmogrifier pipeline
- webcouturier.dropdownmenu-2.3.1-2.lbn13.noarch
Overview
You get dropdown menus for those items in the global navigation that have subitems.
Submenus are built using the same policy as the Site Map, so the tree shown matches what
you would get in the Site Map or the navigation portlet for the appropriate section.
Requires plone.browserlayer to be installed in your site.
How it works
Dropdown menus are built using the same policy as the Site Map: no private objects for
anonymous users, and no objects excluded from the navigation; exactly the same behaviour
you would expect from the Site Map or navigation portlet.