INTERNATIONAL INNOVATION SCHOOL, AUSTRALIA [2026]
ETHICAL AI FOR RESEARCH AND PUBLISHING
NanoTRIZ Summer/Winter Innovation School
Applied AI for Research & Publishing — TBA
An intensive, hands-on program on the practical and ethical use of AI across the research lifecycle. Participants learn interoperable, standards-aligned workflows that accelerate literature discovery, strengthen analysis and reproducibility, and improve scholarly communication (including video-based publications). Delivery is hybrid (online + onsite). 12–15 PD/CPD hours are documented on the certificate (recognition subject to the policies of the participant’s home institution).
Sessions at a Glance
- S1 — AI for Discovery & Ideation (Days 1–2)
- S2 — AI for Data, Modelling & Reproducibility (Day 3)
- S3 — AI for Writing, Publishing & Video Science (Day 4)
Format throughout: tutorials, method talks, and workflow demonstrations with extended Q&A (no posters).
Venue & Schedule
- Venue: TBA
- Dates: TBA
- Time zone: TBA
- Audience: Professors/PIs; Postdocs; PhD Candidates; R&D/Industry; Government/Policy; Editors/Publishers
Speaking / Presenting Policy
- Presenting requires abstract acceptance (talk, tutorial, method, or demo). Submission does not guarantee a slot.
- Accepted authors are notified by email and must register under the applicable category.
- Non-presenting participants may attend (space permitting).
- Any associated publications are assessed independently by the journal according to its Peer-Review Policy (single-blind; at least two independent reviewers). See: /policies/peer-review.
Program Details (final agenda to follow)
Plenary (TBA): AI Across the Research Lifecycle — advances, limits, open challenges; standards, policies, and practical impact.
Session 1 (Discovery & Ideation) — Focus: literature intelligence; knowledge mapping; hypothesis generation; project scoping.
Examples: semantic/graph search; citation networks; topic clustering; living knowledge bases; AI-assisted systematic reviews and registered protocols; prompt safeguards; feasibility-aware ideation; preregistration; FAIR metadata; automated provenance.
Session 2 (Data, Modelling & Reproducibility) — Focus: reproducible analysis; uncertainty-aware modelling; auditable automation.
Examples: notebooks, containers, and CI; environment capture; data versioning; experiment tracking; DOE/Bayesian optimisation; uncertainty quantification; baseline validation; ablation/robustness; responsible synthetic data; transparent reporting (including negative results).
Session 3 (Writing, Publishing & Video Science) — Focus: ethical authorship; transparent AI assistance; visual communication; video-based scholarly records.
Examples: responsible LLM use; attribution and CRediT roles; ORCID; version control; AI-use disclosure; plagiarism/AI-text checks (ICMJE/WAME/COPE alignment); data/code availability; accessible figures; methods capture; narration standards; audit trails; DOI-linked data and analysis.
Call for Contributions
- Eligibility: Professors/PIs, Postdocs, PhD Candidates
- Tracks: S1 Discovery & Ideation • S2 Data & Reproducibility • S3 Writing & Publishing
- Modes: Oral • Tutorial • Method • Demo (no posters)
- Abstract: up to 300 words + optional one figure/schematic; include 3–5 keywords, preferred mode, and a short bio
- Slides: 16:9; label units and uncertainties; disclose data/code availability
- Demonstrations (optional): indicate AV/power/safety needs in advance
Key Dates
- Abstract deadline: TBA
- School & symposium dates: TBA
- Review model: single-blind; selection based on methodological clarity, relevance, alignment with standards and reproducibility, and cross-disciplinary impact
- Full paper: not required to present (abstract + short bio suffice)
Mentorship
Lead: Professor Alexander Solovev (former Harvard academic; Guinness World Record in Nanoscience; Australian Global Talent; IOP Emerging Leader; DSM Award; Humboldt & 1000 Talent Fellow).
Roles & Teaming
- Professors: senior mentors/co-PIs; peer feedback; scope and ethics oversight
- Postdocs: project leads; rigour, reproducibility, figure/data integrity
- PhD candidates: drafting and analysis; literature maps; references
- Undergraduates: literature curation; figures/storyboards/video tasks; data tidying
- All participants: follow ethical AI rules (disclosure, attribution, no fabrication; human authors remain responsible)
Governance and Policies
Organiser: NanoTRIZ Innovation Institute (Brisbane, Australia).
Publishing partner: SciViD — The Publisher of Video Science (www.scividjournal.com; open access; independent Editorial Board).
Editorial independence: organisers and sponsors do not influence editorial decisions. See: /about/editorial-board.
Policies: Ethics & Integrity (COPE/WAME/ICMJE), Open Access & Licensing (CC BY 4.0), Data/Code Availability, Privacy, Accessibility. See: /policies/ethics, /policies/open-access, /policies/data-code, /policies/privacy, /policies/accessibility.
Optional Capstone — Indicative 10-Day Co-Authorship Sprint (TBA)
Flow: orientation & ethics → collaboration charter → literature intelligence & gap discovery → method storyboards & roles → optional cultural/networking day → AI-assisted outlining/drafting with provenance capture → draft sprints with mentor drop-ins → partner meetups & collaboration planning → visuals, video-abstract & editorial readiness → peer-review circle & closing.
Deliverable: a near-final manuscript or a video-abstract + text package (with cover-letter draft). Deliverables must include: author CRediT roles, AI-use disclosure, and data/code availability statements aligned with journal policies.
PD/CPD & Certification
- Recorded live contact hours: 12–15 (documented on the certificate).
- Certificate criteria: attend ≥75% of live online contact hours (attendance logs kept) and submit a capstone (draft manuscript or video-abstract + text) with a submission plan.
- Certificates are issued by the NanoTRIZ Innovation Institute; recognition depends on the participant’s home-institution policy.
Registration & Publication Model
Organiser: NanoTRIZ Innovation Institute
Publishing partner: SciViD — The Publisher of Video Science (open access; independent Editorial Board)
Editorial independence: organisers, sponsors, and partners do not influence editorial decisions. See: /about/editorial-board.
Policies: Peer Review (single-blind; ≥2 reviewers), Ethics & Integrity (COPE/WAME/ICMJE), Open Access & Licensing (CC BY 4.0), Data/Code, Privacy, Accessibility. See: /policies/peer-review, /policies/ethics, /policies/open-access, /policies/data-code, /policies/privacy, /policies/accessibility.
Registration Categories & Fees
Presenting authors (accepted abstracts):
- Students (Undergrad/Master’s/PhD): Online $200 | Onsite $400
- Researchers (Postdoc/Faculty/R&D/Editors/Publishers/Government): Online $300 | Onsite $500
Non-presenting participants (attendance only):
- Students: Online $250 | Onsite $450
- Researchers: Online $350 | Onsite $550
Discounts: early-bird $50 off (deadline TBA); 10% group rate for 5+ registrants from one institution (same ticket type; one discount per registration).
Waivers/discounts may be granted for students and authors from low- and middle-income countries or in cases of financial hardship.
Note: Registration fees cover event participation and do not affect abstract selection or editorial outcomes.
What Your Fee Includes
Access to live sessions, Q&A, and training materials (recordings where offered). Onsite tickets include coffee/networking breaks; any cultural/social activities are self-funded.
Publication (optional; APC waived for Special Collection)
Submit a Short Communication, Methods/Workflow Note, or Video Science Paper to the SciViD Summer School & Methods Symposium Special Collection. If accepted after independent peer review, the article receives a Crossref DOI and is published open access at no cost to authors for this Collection (APC waived).
- Licensing: unless otherwise stated, accepted items are published under CC BY 4.0 (text and video).
- Persistent Identifiers: authors are encouraged to include ORCID iDs and institutional ROR IDs in submissions.
- Preservation & Interoperability: long-term preservation via PKP PN/LOCKSS; metadata exposed through OAI-PMH for indexing and discovery (endpoint: /oai).
- No submission or review fees. Editorial decisions are made solely by SciViD’s Editorial Board per the Peer-Review and Ethics policies.
Invitation Letters & Admin Documentation
Official invitation letters may be issued after abstract acceptance (and paid registration, where applicable) to support institutional reimbursement and visa documentation for the host country (TBA). Tax invoices/receipts are provided.
Practical Information
- Accessibility: WCAG-aligned materials; captions for recorded sessions; reasonable accommodations on request. Please indicate mobility/hearing/visual or dietary needs during registration.
- Code of Conduct: respectful, inclusive engagement; zero tolerance for harassment or discrimination.
- Sustainability: digital programme by default; venues with public-transport access where possible; reusable bottles/cups encouraged.
- Industry participation: sponsors and tooling providers welcome; industry talks require registration unless part of a sponsorship; limited slots. Sponsorship does not influence program selection or editorial decisions.
- Format flexibility: planned single-track; may expand to multi-track with Special Session Chairs if demand is high.
- AV & demonstrations: standard AV (16:9 projection; lectern + handheld mics; HDMI/USB-C); tech checks scheduled; limited demo space/power (local voltage TBA); notify safety needs in advance.
- Recording: sessions may be recorded for archive/education; presenters may opt out of recording their own talk by notifying organisers in advance.
- Industrial Advisory Board: leaders in AI research tooling and research-driven industries provide strategic guidance and collaboration pathways (non-editorial).
Refund & Deferral
Online participation is refundable up to 7 days before start (written request by email). Onsite tickets are generally non-refundable due to venue/logistics but may be deferred to a future cohort or transferred to online. Presenter substitution from the same institution is allowed up to 72 hours before the event. If the event is cancelled or materially changed by the organiser, eligible attendees are entitled to a refund consistent with Australian Consumer Law.
Terms, Privacy & Contact
Data are used only for event administration; sponsor visibility is opt-in; no sale of personal data. We comply with applicable privacy laws, including the Australian Privacy Principles (APPs). Health & safety follows venue/local regulations; attendees should carry appropriate insurance. Governing law: TBA.
Contact: founder@nanotriz.com Phone: +61 494 042 578
Organiser: NanoTRIZ Innovation Institute (location TBA; ABN [insert]).
Abstract Submission
Email up to 300 words (optional 1 figure, 3–5 keywords, preferred mode, short bio) to founder@nanotriz.com with subject:
“NanoTRIZ Innovation School & Methods Symposium — Abstract.”
Please include ORCID iDs, institutional ROR IDs, and a brief AI-use disclosure (if applicable).
CALL FOR SUBMISSION — SciViD SPECIAL COLLECTION
COMPANIES ARE INVITED TO PRESENT NEW AI TOOLS

AI THEMATIC TEAM INVENTION ENGINE
Unlike general chat assistants, this engine is purpose-built to accelerate innovation. It ingests licensed/open scholarly and patent corpora, constructs topic graphs, and detects under-explored linkages relative to prior art. Combining semantic mapping, IP landscape analysis, and inventive principles, it produces hypothesis-ready research blueprints:
(1) concise gap statements and opportunity maps,
(2) candidate mechanisms and concept variants,
(3) minimal viable experiments, data needs and risks, and
(4) IP/readiness notes (prior-art clusters, provisional freedom-to-operate signals, suggested collaborators/equipment).
All outputs carry provenance trails (citations, time stamps, confidence tags) so teams can verify and adapt them.
Ethics & limitations. Sources respect publisher terms and user licences.
Results do not guarantee novelty, patentability, or acceptance — expert review and validation are required.
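A minimal sketch of the gap-detection idea, assuming a toy corpus of keyword sets (all records and concept names below are hypothetical): concepts that are each well covered individually but rarely co-studied are flagged as candidate linkages. A production engine would operate over licensed corpora and full topic graphs rather than keywords.

```python
from collections import Counter
from itertools import combinations

# Hypothetical corpus: each record is the keyword set of one paper/patent.
records = [
    {"microrobots", "catalysis", "drug delivery"},
    {"microrobots", "catalysis"},
    {"machine learning", "catalysis"},
    {"machine learning", "drug delivery"},
    {"microrobots", "machine learning"},
]

concept_freq = Counter(c for rec in records for c in rec)
pair_freq = Counter(
    frozenset(p) for rec in records for p in combinations(sorted(rec), 2)
)

# Crude "under-explored linkage" score: both concepts are well covered
# individually, but rarely studied together.
def gap_score(a, b):
    together = pair_freq[frozenset((a, b))]
    return min(concept_freq[a], concept_freq[b]) - together

candidates = sorted(
    combinations(sorted(concept_freq), 2),
    key=lambda p: gap_score(*p),
    reverse=True,
)
for a, b in candidates[:3]:
    print(f"gap candidate: {a} + {b} (score {gap_score(a, b)})")
```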

CROSS-DOMAIN DISCOVERY WITH AN AI TOOLCHAIN
A systematic AI toolchain can support the entire research arc — from question to validated result — by chaining complementary tools rather than relying on any single model. Semantic search and citation-graphing map the literature and patents; claim extraction and trend analysis surface gaps.
Analogy and idea engines propose candidate mechanisms and variants. Active-learning and Bayesian design tools suggest minimal viable experiments, while code notebooks and AutoML assist with analysis under version control for full reproducibility.
Large-language models help draft and revise with provenance, contribution tracking, and disclosure; visualization tools convert data into clear figures; IP-landscape scanners flag prior-art clusters; collaborator/equipment matchers identify feasible partners. At each step, outputs carry citations, assumptions, and confidence tags for human review — yielding evidence-linked, testable hypotheses rather than unchecked recommendations.
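As a sketch of the chaining idea (the tool names and outputs below are hypothetical), each stage can wrap its result in an artifact that accumulates a provenance trail for later human review:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Artifact:
    payload: object
    provenance: list = field(default_factory=list)

def stage(name):
    """Wrap a tool so its output records what produced it and when."""
    def wrap(fn):
        def run(artifact: Artifact) -> Artifact:
            out = fn(artifact.payload)
            record = {
                "tool": name,
                "at": datetime.now(timezone.utc).isoformat(),
                "input_summary": repr(artifact.payload)[:80],
            }
            return Artifact(out, artifact.provenance + [record])
        return run
    return wrap

# Hypothetical stages; real systems would call search APIs, extractors, etc.
@stage("semantic_search")
def find_papers(query):
    return [f"paper relevant to: {query}"]

@stage("claim_extraction")
def extract_claims(papers):
    return [f"claim from {p}" for p in papers]

result = extract_claims(find_papers(Artifact("self-propelled microrobots")))
for rec in result.provenance:
    print(rec["tool"], rec["at"])
```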
Systematic AI Across the Research Phases
Problem Framing
Use semantic discovery and citation-network mapping to reveal how a field is organised, which concepts co-occur, and where discourse is thin. Topic modelling and trend analysis prioritise questions with momentum, while standards/ethics scanners surface relevant reporting norms, data formats, and constraints that should shape the study from the outset.
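A minimal illustration of literature search by semantic similarity, sketched with TF-IDF from scikit-learn over a hypothetical mini-corpus; a real pipeline would pull abstracts from a bibliographic API and use dense embeddings instead:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-corpus of abstracts.
abstracts = [
    "Catalytic microrobots for targeted drug delivery in microfluidics.",
    "Bayesian optimisation of perovskite solar cell processing.",
    "Graph neural networks for materials property prediction.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(abstracts)

query = "machine learning for materials discovery"
scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]

# Rank documents by similarity to the query.
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. ({scores[idx]:.2f}) {abstracts[idx]}")
```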
Evidence Synthesis & Landscape Mapping
Deploy literature and patent miners to extract claims, methods, datasets, and contradictions. Deduplication and quality scoring reduce noise; knowledge-graph builders link entities (materials, variables, readouts) so gaps and potential “white spaces” become explicit. The result is a citable evidence landscape with confidence tags — not a heap of PDFs.
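A small sketch of the white-space idea using networkx (the entities, confidences, and DOIs are invented): evidence edges carry confidence tags and sources, and entity pairs with no connecting edge surface as candidate gaps:

```python
import networkx as nx

# Hypothetical evidence landscape: nodes are extracted entities, edges mean
# "studied together", tagged with a confidence score and a citable source.
G = nx.Graph()
G.add_edge("TiO2", "photocatalysis", confidence=0.9, source="doi:10.x/abc")
G.add_edge("TiO2", "water splitting", confidence=0.7, source="doi:10.x/def")
G.add_edge("g-C3N4", "photocatalysis", confidence=0.8, source="doi:10.x/ghi")

materials = {"TiO2", "g-C3N4"}
readouts = {"photocatalysis", "water splitting"}

# "White space": material/readout pairs with no direct evidence edge.
for m in sorted(materials):
    for r in sorted(readouts):
        if not G.has_edge(m, r):
            print(f"unstudied pair: {m} <-> {r}")
```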
Hypothesis Generation & Study Design
Apply analogy engines and TRIZ-inspired heuristics to propose mechanism variants and control conditions consistent with the mapped evidence. Design assistants turn these into structured study plans — assumptions, variables, measurable outcomes, and acceptance criteria — so downstream stages inherit a clear rationale and test plan.
Experiment Planning & Optimisation
Use active-learning and Bayesian-optimisation planners to suggest minimal yet informative runs, updating recommendations as results arrive. Power-analysis and sensitivity tools right-size the design; feasibility checkers align proposed runs with instrument limits and sample budgets. The goal is fewer, more decisive experiments—not brute-force search.
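A compact Bayesian-optimisation loop, sketched with scikit-learn's Gaussian process and an expected-improvement rule; the objective function here is a toy stand-in for a real experiment (e.g., yield vs. a processing parameter):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Toy stand-in for a slow, expensive experiment.
    return -(x - 0.6) ** 2 + 0.05 * np.sin(20 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 4).reshape(-1, 1)        # a few initial runs
y = objective(X).ravel()

candidates = np.linspace(0, 1, 200).reshape(-1, 1)
for _ in range(6):                              # sequential design loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next))

print(f"best setting x = {X[np.argmax(y)][0]:.3f}, best response = {y.max():.4f}")
```

After a handful of model-guided runs the loop concentrates sampling near the optimum, which is the "fewer, more decisive experiments" behaviour described above.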
Data Capture & Management
Adopt electronic lab notebooks/LIMS integrations to attach rich metadata, environment snapshots, and versioned code to each dataset. Automated QC detects outliers, missingness, drift, and sensor faults at ingest. FAIR-aligned templates keep data and methods findable and reusable across the team and over time.
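A sketch of ingest-time capture, assuming a hypothetical CSV of measurements: a checksum, an environment snapshot, and basic QC (missingness, a crude outlier count) are attached as metadata alongside the dataset:

```python
import hashlib, platform, sys
from pathlib import Path
import pandas as pd

def ingest(csv_path: str) -> dict:
    """Attach a checksum, environment snapshot, and basic QC to a dataset."""
    raw = Path(csv_path).read_bytes()
    df = pd.read_csv(csv_path)
    qc = {
        "rows": len(df),
        "missing_per_column": df.isna().sum().to_dict(),
        # Crude outlier flag: values beyond 4 standard deviations.
        "outliers": int(
            df.select_dtypes("number")
              .apply(lambda s: abs(s - s.mean()) > 4 * s.std())
              .sum().sum()
        ),
    }
    return {
        "sha256": hashlib.sha256(raw).hexdigest(),
        "environment": {"python": sys.version.split()[0],
                        "platform": platform.platform()},
        "qc": qc,
    }

# report = ingest("measurements.csv")   # hypothetical file path
```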
Analysis & Inference
Use validated statistical/ML pipelines with built-in uncertainty quantification and robustness checks (cross-validation plans, ablation, leakage detection). Provenance graphs trace every result back to the exact code, parameters, and inputs, enabling one-click regeneration, auditability, and stress-testing of conclusions.
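One way to sketch the provenance idea: hash the exact code, parameters, and data behind a cross-validated result so it can be audited and regenerated later (synthetic data; the hashing scheme is illustrative, not a standard):

```python
import hashlib, inspect, json
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def analysis(X, y, n_estimators=200):
    model = RandomForestRegressor(n_estimators=n_estimators, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    return scores.mean(), scores.std()           # estimate + uncertainty

X, y = make_regression(n_samples=300, n_features=8, noise=5.0, random_state=0)
mean_r2, std_r2 = analysis(X, y)

# Provenance record: ties the result to the exact code, params, and inputs.
record = {
    "code_sha256": hashlib.sha256(inspect.getsource(analysis).encode()).hexdigest(),
    "params": {"n_estimators": 200, "cv": 5},
    "data_sha256": hashlib.sha256(X.tobytes() + y.tobytes()).hexdigest(),
    "result": {"r2_mean": round(mean_r2, 3), "r2_std": round(std_r2, 3)},
}
print(json.dumps(record, indent=2))
```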
Visualisation for Insight
Leverage visualisation assistants to recommend chart types matched to data/claims, test for perceptual pitfalls, and generate accessible figures with alt text and colour-blind-safe palettes. Storyboarding tools align plots with hypotheses and acceptance criteria, reducing rework and keeping interpretation faithful to the data.
Integrity, Reproducibility & Compliance
Run originality checks on protocols, statistical-reporting audits, and data/ethics confirmations before any external sharing. Environment capture, checksummed datasets, and containerised pipelines make reruns deterministic. Risk/ethics modules flag safety, privacy, or animal/human-research compliance issues early, with change-logs for every deviation.
Collaboration & Resource Matching
Use collaborator/equipment matchers to identify labs, instruments, and datasets that fit the design constraints (e.g., throughput, resolution, field strengths). Task orchestration tools coordinate roles, timelines, and hand-offs so each stage starts with the artefacts it needs and nothing falls between the cracks.
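A toy matcher over a hypothetical instrument registry; a real system would query shared catalogues and booking systems rather than an in-memory list:

```python
# Hypothetical instrument registry.
instruments = [
    {"name": "SEM-A", "resolution_nm": 1.0, "site": "Lab 1", "booked": False},
    {"name": "SEM-B", "resolution_nm": 5.0, "site": "Lab 2", "booked": False},
    {"name": "AFM-C", "resolution_nm": 0.1, "site": "Lab 3", "booked": True},
]

def match(requirements, registry):
    """Return available instruments satisfying every stated design constraint."""
    return [
        inst for inst in registry
        if inst["resolution_nm"] <= requirements["max_resolution_nm"]
        and not inst["booked"]
    ]

print(match({"max_resolution_nm": 2.0}, instruments))   # -> SEM-A only
```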
Continuous Learning Loop
Monitoring agents watch preprints, patents, and datasets mid-study; if new evidence shifts priors or reveals pitfalls, the system proposes design tweaks and updated experiment queues. Lessons learned — successful or not — are captured as structured knowledge, shortening the next cycle from question to validated result.
In-Silico Simulation & Digital Twins
Leverage physics-informed machine learning and surrogate models to emulate complex systems, enabling rapid virtual experiments before wet-lab work. Digital twins produce synthetic datasets for sensitivity analyses, probe boundary limits, and stress-test hypotheses under real-world constraints.
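A minimal surrogate-model sketch: fit a fast emulator to a small budget of "expensive" runs, then use it for cheap virtual sweeps and a global sensitivity check (the simulator here is a toy function standing in for a physics solver):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

def expensive_simulation(x):
    # Toy stand-in for a slow solver: inputs are (temperature, pressure).
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(40, 2))             # small budget of real runs
y = expensive_simulation(X)

surrogate = RandomForestRegressor(random_state=0).fit(X, y)

# Cheap virtual experiments: thousands of surrogate evaluations.
X_virtual = rng.uniform(0, 1, size=(5000, 2))
y_virtual = surrogate.predict(X_virtual)
print(f"predicted response range: [{y_virtual.min():.2f}, {y_virtual.max():.2f}]")

# Global sensitivity: which input drives the emulated system?
imp = permutation_importance(surrogate, X, y, n_repeats=10, random_state=0)
print("importance (temperature, pressure):", imp.importances_mean.round(3))
```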
Systematic AI Across Academic Drafting & Publication
Planning & Scoping
Use scoping assistants to translate your research aims into a structured brief: target audience, core claims, required evidence, and likely venues. Policy scanners surface relevant reporting guidelines (e.g., CONSORT, PRISMA, ARRIVE), data-sharing expectations, and ethics statements so drafting starts aligned with journal norms—not retrofitted at the end.
Outline & Argument Architecture
Outline generators turn the brief into a hierarchical plan (title → claims → sections → paragraphs → figures). Argument-mapping tools test logical flow, label premises vs. evidence, and highlight gaps or redundant sections. This produces a stable “story spine” that guides all contributors and prevents scope creep.
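A sketch of the "story spine" as a data structure (titles and claims invented): each claim carries its supporting evidence, so unsupported claims can be flagged before drafting begins:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)   # source IDs / figure refs

@dataclass
class Outline:
    title: str
    claims: list = field(default_factory=list)

    def gaps(self):
        """Flag claims that are not yet backed by evidence."""
        return [c.text for c in self.claims if not c.evidence]

paper = Outline(
    title="AI-assisted microrobot design",
    claims=[
        Claim("Surrogate models halve experiment counts", ["Fig2", "doi:10.x/a"]),
        Claim("The approach generalises to other systems"),   # no evidence yet
    ],
)
print("unsupported claims:", paper.gaps())
```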
Evidence-Grounded Drafting
Citation-aware drafting assistants insert and format references as you write, link sentences to sources, and warn if a claim lacks support. Retrieval-augmented writing keeps facts anchored to verified passages (with page/line pointers), reducing hallucinations and saving hours of manual cross-checking.
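A minimal support-checking sketch, assuming a vetted set of source IDs (all DOIs invented): sentences with no citation, or citations outside the verified set, are flagged for review:

```python
verified_sources = {"doi:10.x/abc", "doi:10.x/def"}   # hypothetical vetted set

draft = [
    ("Catalytic microrobots reach speeds above 100 um/s.", ["doi:10.x/abc"]),
    ("This effect holds at all pH values.", []),        # unsupported
    ("Bubble propulsion dominates at high peroxide.", ["doi:10.x/zzz"]),
]

for sentence, cites in draft:
    if not cites:
        print(f"NO SUPPORT : {sentence}")
    elif not set(cites) <= verified_sources:
        print(f"UNVERIFIED : {sentence} (cites {set(cites) - verified_sources})")
```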
Language, Tone & Readability
Style controllers adapt tone to the venue (technical, clinical, methods-first), maintain term consistency, and enforce plain-language summaries where needed. Readability and bias detectors flag overly complex sentences, hedging, or unintended claims, while multilingual aids support authors writing in a non-native language—without changing scientific meaning.
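A readability check can be as simple as the Flesch reading-ease formula; the syllable counter below is a rough heuristic (real tools use pronunciation dictionaries), so treat scores as indicative only:

```python
import re

def naive_syllables(word: str) -> int:
    # Very rough heuristic: count vowel groups.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(naive_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

sample = ("The catalytic decomposition of hydrogen peroxide propels the "
          "microrobot. Higher concentrations increase velocity.")
print(f"Flesch reading ease: {flesch_reading_ease(sample):.1f}")   # lower = harder
```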
Referencing & Citation Management
Reference managers with AI import, deduplicate, and normalise records; format conversions (APA/IEEE/Vancouver) and journal-specific styles are applied automatically. In-text citation auditors catch mismatches, missing DOIs, broken links, and retracted sources; reference-graph tools propose must-cite prior art to strengthen positioning.
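A sketch of deduplication and DOI auditing over hypothetical imported records; real managers add fuzzy matching, style conversion, and retraction lookups on top of this:

```python
references = [   # hypothetical imported records
    {"title": "Catalytic Microrobots", "doi": "10.1000/xyz"},
    {"title": "catalytic microrobots ", "doi": "10.1000/xyz"},    # duplicate
    {"title": "Graph Methods for Materials", "doi": None},        # missing DOI
]

def normalise(title: str) -> str:
    return " ".join(title.lower().split())

seen, clean, issues = set(), [], []
for ref in references:
    key = normalise(ref["title"])
    if key in seen:
        continue                       # drop duplicate record
    seen.add(key)
    clean.append(ref)
    if not ref["doi"]:
        issues.append(f"missing DOI: {ref['title']}")

print(f"{len(clean)} unique references; issues: {issues}")
```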
Figures, Tables & Visual Narratives
Figure planners align each visual to a specific claim and dataset, propose appropriate chart types, and generate draft captions that state the finding, method, and uncertainty. Table builders auto-check units, significant figures, and footnotes; accessibility checks add alt text and ensure colour-contrast compliance.
Originality, Attribution & Integrity
Originality pipelines combine paraphrase-risk analysis, citation coverage checks, and similarity screening to reduce (not “guarantee zero”) plagiarism risk. Attribution helpers insert quotation markers where verbatim text is appropriate and prompt for citations on close paraphrases. Provenance logs preserve prompts, sources, and model versions so AI use is disclosed and human authorship remains accountable.
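A minimal similarity screen using word n-gram overlap (the sample sentences are invented); production pipelines combine this with paraphrase models and citation-coverage checks:

```python
def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str, n: int = 5) -> float:
    """Jaccard overlap of word 5-grams; high values suggest close paraphrase."""
    A, B = ngrams(a, n), ngrams(b, n)
    return len(A & B) / len(A | B) if A | B else 0.0

draft = "the catalytic engine decomposes peroxide fuel into oxygen bubbles"
source = "the catalytic engine decomposes peroxide fuel into oxygen microbubbles"
print(f"5-gram overlap: {overlap(draft, source):.2f}")   # flag above a threshold
```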
Authorship, Contributions & Compliance
Contribution trackers (e.g., CRediT-style roles) map who did conceptualisation, methods, analysis, drafting, and supervision. Checklists verify conflicts of interest, funding statements, data/code availability, and ethics approvals. An “AI-use” note is auto-generated, distinguishing tool assistance from intellectual authorship in line with contemporary editorial policies.
Journal Fit & Formatting
Venue-matchers score fit based on scope, recent topics, typical article length, and methodological expectations. Template kits then conform the manuscript to house style: section order, heading levels, reference style, word/figure limits, and graphical abstract specs—preventing desk rejections due to formatting.
Submission, Peer Review & Revision
Submission packagers assemble cover letters, highlights, author statements, and repository metadata; they validate file types, figure DPI, and anonymisation for double-blind review. During peer review, response assistants align point-by-point replies to reviewer comments with tracked changes, regenerate affected figures/tables from source code, and maintain a clean audit trail across versions — shortening cycles and preserving scientific fidelity.
Guiding principle: at every stage, AI augments — not replaces — scholarly judgment. Outputs carry citations, assumptions, and confidence so editors and reviewers can verify the chain of evidence, while authors retain control over interpretation, originality, and accountability.